Patent abstract:
Method to decode a video. Video decoding method comprising: generating prediction samples based on an intra-prediction mode of a current block, and determining whether to apply an update process to the prediction samples of the current block, in which, when it is determined to apply the update process, the prediction samples in the current block are updated based on their respective offsets, in which an offset is adaptively determined in a first sub-region in the current block, an offset is set to zero in a second sub-region in the current block, and in which a pattern formed by the first sub-region and the second sub-region is different when the intra-prediction mode of the current block is a non-directional mode and when it is a directional mode. (Machine-translation by Google Translate, not legally binding)
Publication number: ES2844525A2
Application number: ES202031130
Filing date: 2016-09-12
Publication date: 2021-07-22
Inventors: Keun Lee Bae; Young Kim Joo
Applicant: KT Corp
Primary IPC:
Patent description:

[0003] Technical field
[0004] The present invention relates to a method and a device for processing a video signal.
[0006] Background of the technique
[0007] At present, demand for high-resolution and high-quality images, such as high-definition (HD) images and ultra-high-definition (UHD) images, has increased in various fields of application. However, higher-resolution and higher-quality image data involve ever greater amounts of data compared to conventional image data. Therefore, when image data are transmitted using a medium such as conventional wired and wireless networks, or stored using a conventional storage medium, transmission and storage costs increase. To solve these problems, which occur with an increase in the resolution and quality of image data, high-efficiency image encoding/decoding techniques can be used.
[0009] Image compression technology includes various techniques, including: an inter-prediction technique of predicting a pixel value included in a current snapshot from an earlier or later snapshot of the current snapshot; an intra-prediction technique of predicting a pixel value included in a current snapshot using pixel information in the current snapshot; an entropy coding technique of assigning a short code to a value with a high frequency of occurrence and a long code to a value with a low frequency of occurrence; etc. Image data can be efficiently compressed, and transmitted or stored, using such image compression technology.
[0011] Meanwhile, along with demand for high-resolution images, demand for stereoscopic image content, which is a new imaging service, has also increased. A video compression technique for efficiently delivering high-resolution and ultra-high-resolution stereoscopic image content is being explored.
[0012] Disclosure
[0014] Technical problem
[0016] An object of the present invention is to provide a method and device for encoding/decoding a video signal, the method and device hierarchically dividing a coding block.
[0018] An object of the present invention is to provide a method and device for encoding/decoding a video signal, the method and device performing intra-prediction on a target encoding/decoding block.
[0020] An object of the present invention is to provide a method and device for encoding/decoding a video signal, the method and device correcting a prediction sample of a target encoding/decoding block.
[0022] An object of the present invention is to provide a method and device for encoding/decoding a video signal, the method and device updating a first prediction sample generated via intra-prediction to a second prediction sample using an offset.
[0024] Technical solution
[0026] In accordance with the present invention, there is provided a method and device for decoding a video signal, the method including: generating a first prediction sample by performing intra-prediction on a current block; determining an intra-prediction pattern that specifies a pattern in which the current block is divided into sub-blocks; determining the offset in sub-block units of the current block based on the intra-prediction pattern; and generating a second prediction sample in sub-block units of the current block using the first prediction sample and offset.
[0028] In the method and device for decoding a video signal according to the present invention, the current block can include multiple sub-blocks, and it can be determined whether or not the offset is assigned to each sub-block.
[0030] In the method and device for decoding a video signal according to the present invention, it can be determined whether or not to assign the offset to a sub-block based on a position of the sub-block.
[0032] In the method and device for decoding a video signal according to the present invention, the current block can include multiple sub-blocks, and a different value of the offset can be assigned to each sub-block.
[0034] In the method and device for decoding a video signal according to the present invention, the offset can be derived from a reference sample adjacent to the current block.
[0036] In accordance with the present invention, there is provided a method and device for encoding a video signal, the method including: generating a first prediction sample by performing intra-prediction on a current block; determining an intra-prediction pattern that specifies a pattern in which the current block is divided into sub-blocks; determining the offset in sub-block units of the current block based on the intra-prediction pattern; and generating a second prediction sample in sub-block units of the current block using the first prediction sample and offset.
[0038] In the method and device for encoding a video signal according to the present invention, the current block can include multiple sub-blocks, and it can be determined whether or not the offset is assigned to each sub-block.
[0040] In the method and device for encoding a video signal according to the present invention, it can be determined whether or not to assign the offset to a sub-block based on a position of the sub-block.
[0042] In the method and device for encoding a video signal according to the present invention, the current block can include multiple sub-blocks, and a different value of the offset can be assigned to each sub-block.
[0044] In the method and device for encoding a video signal according to the present invention, the offset can be derived from a reference sample adjacent to the current block.
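As an illustration of the claimed flow, the sketch below applies per-sub-block offsets to a block of first prediction samples to produce second prediction samples. It is a minimal sketch, assuming a 2x2 sub-block pattern and illustrative offset values; the function and array names are hypothetical and are not part of the disclosure.

```python
import numpy as np

def apply_offset_update(first_pred, sub_block_offsets):
    """Generate second prediction samples by adding a per-sub-block offset.

    first_pred        -- HxW array of first prediction samples (intra-predicted)
    sub_block_offsets -- 2x2 array with one offset per sub-block; an entry of 0
                         leaves that sub-block's samples unchanged
    """
    h, w = first_pred.shape
    sh, sw = h // 2, w // 2          # sub-block size for a 2x2 pattern
    second_pred = first_pred.copy()
    for i in range(2):
        for j in range(2):
            second_pred[i*sh:(i+1)*sh, j*sw:(j+1)*sw] += sub_block_offsets[i, j]
    return second_pred

# Offset assigned only to the upper sub-blocks; the lower ones keep offset 0.
block = np.full((8, 8), 128, dtype=np.int32)
print(apply_offset_update(block, np.array([[3, 3], [0, 0]])))
```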
[0046] Advantageous effects
[0048] According to the present invention, it is possible to improve the coding efficiency through hierarchical / adaptive division of a coding block.
[0050] According to the present invention, it is possible to efficiently determine an intra-prediction mode of a target encoding / decoding block, and to improve the intra-prediction accuracy.
[0051] Description of the drawings
[0052] Figure 1 is a block diagram illustrating a device for encoding a video in accordance with an embodiment of the present invention.
[0054] Figure 2 is a block diagram illustrating a device for decoding a video in accordance with an embodiment of the present invention.
[0056] Figure 3 is a view illustrating an example of hierarchical division of a coding block based on a tree structure according to an embodiment of the present invention.
[0058] Figure 4 is a view illustrating types of predefined intra-prediction modes for a device for encoding / decoding a video according to an embodiment of the present invention.
[0060] Figure 5 is a flow chart briefly illustrating an intraprediction procedure in accordance with one embodiment of the present invention.
[0062] Figure 6 is a view illustrating a method of correcting a prediction sample of a current block based on differential information from neighboring samples in accordance with an embodiment of the present invention.
[0064] Figures 7 and 8 are views illustrating a method of correcting a prediction sample based on a predetermined correction filter in accordance with an embodiment of the present invention.
[0066] Figure 9 is a view illustrating a method of correcting a prediction sample using weight and offset in accordance with one embodiment of the present invention.
[0068] Figures 10 to 15 are views illustrating a method of composing a template for determining weight w in accordance with one embodiment of the present invention.
[0070] Figure 16 is a view illustrating a method of correcting a prediction sample based on offset in accordance with an embodiment of the present invention.
[0072] Figures 17 to 21 are views illustrating examples of an intraprediction pattern of a current block in accordance with one embodiment of the present invention.
[0074] Figure 22 is a view illustrating a method of performing prediction using an intra-block copy technique in accordance with one embodiment of the present invention.
[0076] Figure 23 is a flow chart illustrating a symbol encoding procedure.
[0078] Figure 24 is a view illustrating an example of dividing the interval [0,1) into sub-intervals based on a probability of occurrence of a symbol.
[0080] Figure 25 is a view illustrating an example of setting a probability index depending on a position of a block to be encoded.
[0082] Figures 26 and 27 are views illustrating examples of tile division and slice segments.
[0084] Figure 28 is a view illustrating an example of determining an initial probability index differently for each tile.
[0086] Best mode
[0088] In accordance with the present invention, there is provided a method and device for decoding a video signal, the method including: generating a first prediction sample by performing intra-prediction on a current block; determining an intra-prediction pattern that specifies a pattern in which the current block is divided into sub-blocks; determining the offset in sub-block units of the current block based on the intra-prediction pattern; and generating a second prediction sample in sub-block units of the current block using the first prediction sample and offset.
[0090] In the method and device for decoding a video signal according to the present invention, the current block can include multiple sub-blocks, and it can be determined whether or not the offset is assigned to each sub-block.
[0092] In the method and device for decoding a video signal in accordance with the present invention, it can be determined whether or not to assign the offset to a sub-block based on a position of the sub-block.
[0094] In the method and device for decoding a video signal according to the present invention, the current block can include multiple sub-blocks, and a different value of the offset can be assigned to each sub-block.
[0096] In the method and device for decoding a video signal according to the present invention, the offset can be derived from a reference sample adjacent to the current block.
[0098] In accordance with the present invention, there is provided a method and device for encoding a video signal, the method including: generating a first prediction sample by performing intra-prediction on a current block; determining an intra-prediction pattern that specifies a pattern in which the current block is divided into sub-blocks; determining the offset in sub-block units of the current block based on the intra-prediction pattern; and generating a second prediction sample in sub-block units of the current block using the first prediction sample and offset.
[0100] In the method and device for encoding a video signal according to the present invention, the current block can include multiple sub-blocks, and it can be determined whether or not the offset is assigned to each sub-block.
[0102] In the method and device for encoding a video signal according to the present invention, it can be determined whether or not to assign the offset to a sub-block based on a position of the sub-block.
[0104] In the method and device for encoding a video signal according to the present invention, the current block can include multiple sub-blocks, and a different value of the offset can be assigned to each sub-block.
[0106] In the method and device for encoding a video signal according to the present invention, the offset can be derived from a reference sample adjacent to the current block.
[0108] Mode for invention
[0109] Various modifications can be made to the present invention and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. However, the present invention is not limited thereto, and the exemplary embodiments may be considered to include all modifications, equivalents, or substitutes within the technical concept and technical scope of the present invention. Like reference numerals refer to like elements in the drawings described.
[0111] The terms used in the specification, 'first', 'second', etc., can be used to describe various components, but the components are not to be construed as being limited to these terms. The terms are used only to differentiate one component from other components. For example, the 'first' component may be named the 'second' component without departing from the scope of the present invention, and the 'second' component may also be similarly named the 'first' component. The term 'and/or' includes a combination of a plurality of items and any one of the plurality of items.
[0113] It will be understood that when an element is referred to simply as being 'connected to' or 'coupled to' another element in the present description, it may be 'directly connected to' or 'directly coupled to' the other element, or it may be connected to or coupled to the other element with still other elements between them. In contrast, it should be understood that when an element is referred to as being 'directly coupled' or 'directly connected' to another element, there are no intermediate elements present.
[0115] The terms used in the present specification are used merely to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the expression in the plural, unless it clearly means otherwise in context. In the present specification, it is to be understood that terms such as "including", "having", etc., are intended to indicate the existence of the features, numbers, steps, actions, elements, parts, or combinations thereof disclosed in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts, or combinations thereof may exist or be added.
[0117] Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same constituent elements in the drawings are indicated by the same reference numerals, and repeated descriptions of the same elements will be omitted.
[0119] Figure 1 is a block diagram illustrating a device for encoding a video in accordance with an embodiment of the present invention.
[0121] Referring to Figure 1, the device for encoding a video 100 may include: a snapshot splitting module 110, prediction modules 120 and 125, a transform module 130, a quantization module 135, a reorganization module 160, an entropy encoding module 165, an inverse quantization module 140, an inverse transform module 145, a filter module 150, and a memory 155.
[0123] The constitutional parts shown in Figure 1 are shown independently so as to represent characteristic functions different from each other in the device for encoding a video. This does not mean that each constitutional part is constituted as a separate hardware or software unit. In other words, the constitutional parts are listed separately for convenience. At least two of the constitutional parts can be combined into a single constitutional part, or one constitutional part can be divided into a plurality of constitutional parts to perform each function. The embodiment where constitutional parts are combined and the embodiment where one constitutional part is divided are also included in the scope of the present invention, provided they do not depart from the essence of the present invention.
[0125] Also, some of the constituents may not be indispensable constituents performing essential functions of the present invention, but rather selective constituents that merely enhance its performance. The present invention can be implemented by including only the constitutional parts indispensable to implementing the essence of the present invention, excluding the constituents used to improve performance. A structure that includes only the indispensable constituents, excluding the selective constituents used only to enhance performance, is also within the scope of the present invention.
[0127] The snapshot splitting module 110 can split an input snapshot into one or more processing units. At this point, the processing unit can be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The snapshot splitting module 110 can divide a snapshot into combinations of multiple coding units, prediction units, and transform units, and can encode a snapshot by selecting a combination of coding units, prediction units, and transform units according to a predetermined criterion (for example, a cost function).
[0129] For example, a snapshot can be divided into multiple coding units. A recursive tree structure, such as a quad tree structure, can be used to divide a snapshot into coding units. A coding unit that is divided into other coding units, with a snapshot or a largest coding unit as a root, can be divided with as many child nodes as there are divided coding units. A coding unit that is no longer divided due to a certain constraint serves as a leaf node. That is, when it is assumed that only square division is possible for a coding unit, one coding unit can be divided into a maximum of four other coding units.
[0131] Hereinafter, in the embodiment of the present invention, the encoding unit may mean a unit that performs encoding or a unit that performs decoding.
[0133] A prediction unit can be divided with at least one square or rectangular shape of the same size within a single coding unit, or can be divided so that one prediction unit divided within a single coding unit has a different shape and/or size from another divided prediction unit.
[0135] When an intrapredicted prediction unit is generated based on one coding unit and the coding unit is not the smallest coding unit, intraprediction can be performed without dividing the coding unit into multiple NxN prediction units.
[0137] Prediction modules 120 and 125 may include an inter-prediction module 120 that performs inter-prediction and an intra-prediction module 125 that performs intra-prediction. Whether to perform inter-prediction or intra-prediction for the prediction unit can be determined, and detailed information (e.g., an intra-prediction mode, a motion vector, a reference snapshot, etc.) can be determined according to each prediction procedure. At this point, the processing unit subjected to prediction may be different from the processing unit for which the prediction procedure and the detailed content are determined. For example, the prediction procedure, the prediction mode, etc., can be determined for the prediction unit, and the prediction can be performed for the transform unit. A residual value (residual block) between the generated prediction block and an original block can be input to the transform module 130. Also, prediction mode information, motion vector information, etc., used for prediction can be encoded with the residual value by the entropy encoding module 165 and can be transmitted to a device for decoding a video. When a particular encoding mode is used, it is possible to transmit to the video decoding device by encoding the original block intact, without generating the prediction block through the prediction modules 120 and 125.
[0139] The inter-prediction module 120 can predict the prediction unit based on information from at least one of a previous snapshot or a subsequent snapshot of the current snapshot, or, in some cases, can predict the prediction unit based on information from some encoded regions in the current snapshot. The inter-prediction module 120 may include a reference snapshot interpolation module, a motion prediction module, and a motion compensation module.
[0141] The reference snapshot interpolation module can receive reference snapshot information from memory 155 and can generate pixel information of a whole pixel or less than a whole pixel from the reference snapshot. In the case of luminance pixels, an 8-tap DCT-based interpolation filter having different filter coefficients can be used to generate pixel information of a whole pixel or less than a whole pixel in units of 1/4 pixel. In the case of chrominance signals, a 4-tap DCT-based interpolation filter having different filter coefficients can be used to generate pixel information of a whole pixel or less than a whole pixel in units of 1/8 pixel.
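For illustration, the sketch below applies an 8-tap filter to derive a fractional-pel luminance sample. The coefficients are HEVC-style values assumed for the example; the patent itself does not list filter coefficients, and the function name is hypothetical.

```python
import numpy as np

# HEVC-style 8-tap luma filters (an assumption; the patent does not list
# coefficients). The key selects the 1/4, 1/2, or 3/4 fractional phase.
LUMA_FILTERS = {
    1: [-1, 4, -10, 58, 17, -5, 1, 0],    # quarter-pel
    2: [-1, 4, -11, 40, 40, -11, 4, -1],  # half-pel
    3: [0, 1, -5, 17, 58, -10, 4, -1],    # three-quarter-pel
}

def interpolate_horizontal(row, x, phase):
    """Interpolate one fractional-pel luma sample near integer position x."""
    taps = LUMA_FILTERS[phase]
    window = row[x - 3:x + 5]          # 8 integer samples around x
    acc = int(np.dot(taps, window))
    return (acc + 32) >> 6             # normalize: the filter gain is 64

row = np.arange(100, 116, dtype=np.int32)  # a toy row of luma samples
print(interpolate_horizontal(row, 8, 1))   # quarter-pel sample near x = 8
```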
[0143] The motion prediction module can perform motion prediction based on the reference snapshot interpolated by the reference snapshot interpolation module. As procedures for calculating a motion vector, various procedures can be used, such as a full-search-based block matching algorithm (FBMA), a three-step search (TSS), a new three-step search algorithm (NTS), etc. The motion vector can have a motion vector value in units of 1/2 pixel or 1/4 pixel based on an interpolated pixel. The motion prediction module can predict a current prediction unit by varying the motion prediction procedure. As motion prediction procedures, various procedures can be used, such as a skip procedure, a merge procedure, an AMVP (Advanced Motion Vector Prediction) procedure, an intra-block copy procedure, and so on.
[0144] The intra-prediction module 125 can generate a prediction unit based on reference pixel information neighboring the current block, which is pixel information in the current snapshot. When a neighboring block of the current prediction unit is an inter-predicted block and therefore a reference pixel is an inter-predicted pixel, the reference pixel included in the inter-predicted block can be replaced by reference pixel information from a neighboring block subjected to intra-prediction. That is, when a reference pixel is not available, at least one reference pixel among the available reference pixels can be used instead of the unavailable reference pixel information.
[0146] Prediction modes in intraprediction can include a directional prediction mode that uses reference pixel information that depends on a prediction direction and a non-directional prediction mode that does not use directional information when making predictions. A mode for predicting luminance information may be different from a mode for predicting chrominance information, and for predicting chrominance information, intra-prediction mode information used to predict luminance information or predicted luminance signal information can be used.
[0148] When performing intra-prediction, when the size of the prediction unit is the same as the size of the transform unit, intra-prediction can be performed on the prediction unit based on pixels to the left, top left, and above the prediction unit. However, when performing intra-prediction, when the size of the prediction unit is different from the size of the transform unit, intra-prediction can be performed using a reference pixel based on the transform unit. Also, intra-prediction using NxN division can be used for only the smallest coding unit.
[0150] In the intra-prediction procedure, a prediction block can be generated after applying an AIS (Adaptive Intra Smoothing) filter to a reference pixel depending on the prediction mode. The type of AIS filter applied to the reference pixel can vary. To perform the intra-prediction procedure, the intra-prediction mode of the current prediction unit can be predicted from the intra-prediction mode of a prediction unit neighboring the current prediction unit. In predicting the prediction mode of the current prediction unit using mode information predicted from the neighboring prediction unit, when the intra-prediction mode of the current prediction unit is the same as the intra-prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and of the neighboring prediction unit are equal to each other can be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy coding can be performed to encode the prediction mode information of the current block.
[0152] Also, a residual block that includes information about a residual value, which is a difference between the prediction unit subjected to prediction and the original block of the prediction unit, can be generated based on the prediction units generated by the prediction modules 120 and 125. The generated residual block can be input to the transform module 130.
[0154] The transform module 130 can transform the residual block, which includes the information about the residual value between the original block and the prediction units generated by the prediction modules 120 and 125, using a transform procedure such as discrete cosine transform (DCT), discrete sine transform (DST), or KLT. Whether to apply DCT, DST, or KLT to transform the residual block can be determined based on the intra-prediction mode information of the prediction unit used to generate the residual block.
[0156] Quantization module 135 can quantize values transformed to a frequency domain by transform module 130. Quantization coefficients can vary depending on the block or importance of a snapshot. The values calculated by quantization module 135 can be provided to inverse quantization module 140 and rearrangement module 160.
[0158] The rearrangement module 160 may rearrange quantized residual value coefficients.
[0160] The reorganization module 160 can change a coefficient in the form of a two-dimensional block into a coefficient in the form of a one-dimensional vector through a coefficient scanning procedure. For example, the reorganization module 160 can scan from a DC coefficient to a coefficient in a high-frequency domain using a zigzag scanning procedure to change the coefficients into the form of one-dimensional vectors. Depending on the size of the transform unit and the intra-prediction mode, vertical direction scanning, where the coefficients in the form of two-dimensional blocks are scanned in the column direction, or horizontal direction scanning, where the coefficients in the form of two-dimensional blocks are scanned in the row direction, can be used instead of zigzag scanning. That is, which scanning procedure among zigzag scanning, vertical direction scanning, and horizontal direction scanning is used can be determined depending on the size of the transform unit and the intra-prediction mode.
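The scan selection described above can be pictured with the following sketch, which flattens a coefficient block by zigzag, vertical, or horizontal scanning. The anti-diagonal ordering used for the zigzag walk is a common convention assumed here, not taken from the patent.

```python
import numpy as np

def scan_coefficients(block, mode):
    """Flatten a 2D coefficient block into a 1D vector by the chosen scan."""
    n = block.shape[0]
    if mode == "vertical":    # column by column
        return block.T.reshape(-1)
    if mode == "horizontal":  # row by row
        return block.reshape(-1)
    # zigzag: walk anti-diagonals outward from the DC coefficient at (0, 0),
    # alternating direction on every diagonal
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return np.array([block[r, c] for r, c in order])

coeffs = np.array([[9, 5, 1, 0],
                   [4, 2, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0]])
print(scan_coefficients(coeffs, "zigzag"))  # low frequencies come out first
```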
[0162] The entropy coding module 165 can perform entropy coding based on the values calculated by the reorganization module 160. The entropy coding can use various coding procedures, for example, exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
[0164] The entropy coding module 165 can encode various information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, division unit information, prediction unit information, transform unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc., from reorganization module 160 and prediction modules 120 and 125.
[0166] Entropy encoding module 165 may entropy encode encoding unit coefficients input from reorganization module 160.
[0168] The inverse quantization module 140 can inverse quantize the values quantized by the quantization module 135, and the inverse transform module 145 can inverse transform the values transformed by the transform module 130. The residual value generated by the inverse quantization module 140 and the inverse transform module 145 can be combined with the prediction unit provided by a motion estimation module, a motion compensation module, and the intra-prediction module of the prediction modules 120 and 125, such that a reconstructed block can be generated.
[0170] The filter module 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
[0172] The deblocking filter can eliminate block distortion that occurs due to boundaries between blocks in the reconstructed snapshot. To determine whether to perform deblocking, the pixels included in several rows or columns in the block can be a basis for determining whether to apply the deblocking filter to the current block. When the deblocking filter is applied to the block, a strong filter or a weak filter can be applied depending on the required deblocking filtering strength. Also, in applying the deblocking filter, horizontal direction filtering and vertical direction filtering can be processed in parallel.
[0174] The offset correction module can correct the offset from the original snapshot in units of a pixel in the snapshot subjected to deblocking. To perform offset correction on a particular snapshot, it is possible to use a procedure of applying an offset taking into account edge information of each pixel, or a procedure of dividing the pixels of a snapshot into a predetermined number of regions, determining a region in which offset is to be performed, and applying the offset to the determined region.
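As a rough illustration of the edge-based variant, the sketch below classifies each sample against its two horizontal neighbors and adds the offset assigned to its category. The categories and values are simplified assumptions in the spirit of sample adaptive offset, not the patent's exact design.

```python
import numpy as np

def edge_offset_1d(samples, offsets):
    """Add a category-dependent offset to each interior sample of a row.

    offsets -- dict mapping edge categories ('valley', 'peak') to offsets;
               samples that are neither keep their value.
    """
    out = samples.copy()
    for x in range(1, len(samples) - 1):
        left, cur, right = samples[x - 1], samples[x], samples[x + 1]
        if cur < left and cur < right:
            out[x] = cur + offsets.get("valley", 0)   # local minimum
        elif cur > left and cur > right:
            out[x] = cur + offsets.get("peak", 0)     # local maximum
    return out

row = np.array([10, 7, 12, 12, 9], dtype=np.int32)
print(edge_offset_1d(row, {"valley": 2, "peak": -2}))  # -> [10 9 12 12 9]
```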
[0176] Adaptive loop filtering (ALF) can be performed based on a value obtained by comparing the filtered reconstructed snapshot and the original snapshot. The pixels included in the snapshot can be divided into predetermined groups, a filter to be applied to each of the groups can be determined, and filtering can be performed individually for each group. Information on whether to apply ALF, and the luminance signal, can be transmitted per coding unit (CU). The shape and filter coefficients of a filter for ALF can vary depending on each block. Also, the same form (fixed form) of ALF filter can be applied regardless of the characteristics of the application target block.
[0178] The memory 155 may store the reconstructed block or snapshot calculated through the filter module 150. The stored reconstructed block or snapshot can be provided to the prediction modules 120 and 125 when performing interprediction.
[0180] Figure 2 is a block diagram illustrating a device for decoding a video in accordance with an embodiment of the present invention.
[0182] Referring to Figure 2, the device for decoding a video 200 may include: an entropy decoding module 210, a reorganization module 215, an inverse quantization module 220, an inverse transform module 225, prediction modules 230 and 235, a filter module 240, and a memory 245.
[0184] When a video bit stream is input from the device for encoding a video, the input bit stream can be decoded according to a reverse procedure of the device for encoding a video.
[0185] The entropy decoding module 210 can perform entropy decoding according to the reverse of the entropy encoding procedure performed by the entropy encoding module of the device for encoding a video. For example, in correspondence with the procedures performed by the device for encoding a video, various procedures can be applied, such as exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
[0187] The entropy decoding module 210 can decode information related to the intra-prediction and inter-prediction performed by the device for encoding a video.
[0189] The reorganization module 215 can perform reorganization on the bit stream entropy-decoded by the entropy decoding module 210, based on the reorganization procedure used in the device for encoding a video. The reorganization module can reconstruct and reorganize the coefficients in the form of one-dimensional vectors into coefficients in the form of two-dimensional blocks. The reorganization module 215 can perform reorganization by receiving information related to the coefficient scanning performed in the device for encoding a video, and can perform reorganization through a procedure of inversely scanning the coefficients based on the scanning order used in the device for encoding a video.
[0191] The inverse quantization module 220 can perform inverse quantization based on a quantization parameter received from the device for encoding a video and the reorganized coefficients of the block.
[0193] The inverse transform module 225 can perform inverse transform, that is, inverse DCT, inverse DST, and inverse KLT, which is the reverse of the transform, that is, DCT, DST, and KLT, performed by the transform module on the quantization result in the device for encoding a video. The inverse transform can be performed based on the transform unit determined by the device for encoding a video. The inverse transform module 225 of the device for decoding a video can selectively perform transform schemes (e.g., DCT, DST, and KLT) depending on multiple pieces of information, such as the prediction procedure, the size of the current block, the prediction direction, etc.
[0195] The prediction modules 230 and 235 can generate a prediction block based on information about prediction block generation received from the entropy decoding module 210 and information about a previously decoded snapshot or block received from memory 245.
[0197] As described above, like the operation of the device for encoding a video, when performing intra-prediction, when the size of the prediction unit is the same as the size of the transform unit, intra-prediction can be performed on the prediction unit based on pixels to the left, top left, and above the prediction unit. When performing intra-prediction, when the size of the prediction unit is different from the size of the transform unit, intra-prediction can be performed using a reference pixel based on the transform unit. Also, intra-prediction using NxN division can be used for only the smallest coding unit.
[0199] The prediction modules 230 and 235 may include a prediction unit determination module, an inter-prediction module, and an intra-prediction module. The prediction unit determination module may receive various information, such as prediction unit information, prediction mode information of an intra-prediction procedure, information about motion prediction of an inter-prediction procedure, etc., from the entropy decoding module 210, can divide a current coding unit into prediction units, and can determine whether inter-prediction or intra-prediction is performed on the prediction unit. Using the information required for inter-prediction of the current prediction unit received from the device for encoding a video, the inter-prediction module 230 can perform inter-prediction on the current prediction unit based on information from at least one of a previous snapshot or a subsequent snapshot of the current snapshot that includes the current prediction unit. Alternatively, inter-prediction can be performed based on information from some pre-reconstructed regions in the current snapshot that includes the current prediction unit.
[0201] To perform inter-prediction, it can be determined for the coding unit which of a skip mode, a merge mode, an AMVP mode, and an intra-block copy mode is used as the motion prediction procedure of the prediction unit included in the coding unit.
[0203] The intra-prediction module 235 may generate a prediction block based on pixel information in the current snapshot. When the prediction unit is a prediction unit subjected to intra-prediction, intra-prediction can be performed based on the intra-prediction mode information of the prediction unit received from the device for encoding a video. The intra-prediction module 235 may include an adaptive intra smoothing (AIS) filter, a reference pixel interpolation module, and a DC filter. The AIS filter performs filtering on the reference pixel of the current block, and whether to apply the filter can be determined depending on the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixel of the current block using the prediction mode of the prediction unit and the AIS filter information received from the device for encoding a video. When the prediction mode of the current block is a mode where AIS filtering is not performed, the AIS filter may not be applied.
[0205] When the prediction mode of the prediction unit is a prediction mode in which intraprediction is performed based on the pixel value obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate the reference pixel of a whole pixel or less than a whole pixel. When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolation of the reference pixel, the reference pixel may not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is a DC mode.
[0207] The reconstructed block or snapshot can be provided to the filter module 240. The filter module 240 may include the deblocking filter, the offset correction module, and the ALF.
[0209] Information on whether or not the deblocking filter is applied to the corresponding block or snapshot, and information on which of a strong filter and a weak filter is applied when the deblocking filter is applied, can be received from the device for encoding a video. The deblocking filter of the device for decoding a video can receive information about the deblocking filter from the device for encoding a video, and can perform deblocking filtering on the corresponding block.
[0211] The offset correction module can perform offset correction on the reconstructed snapshot based on the type of offset correction and offset value information applied to a snapshot when encoding.
[0213] The ALF can be applied to the encoding unit based on information on whether to apply the ALF, ALF coefficient information, etc., received from the device for encoding a video. The ALF information can be provided as being included in a particular set of parameters.
[0214] Memory 245 can store the reconstructed snapshot or block for use as a reference block or snapshot and can provide the reconstructed snapshot to an output module.
[0216] As described above, in the embodiment of the present invention, for convenience of explanation, the coding unit is used as a term representing a unit for encoding, but the coding unit may serve as a unit that performs decoding as well as encoding.
[0218] Figure 3 is a view illustrating an example of hierarchical division of a coding block based on a tree structure according to an embodiment of the present invention.
[0220] An input video signal is decoded in predetermined block units. One such default unit for decoding the input video signal is a coding block. The coding block can be a unit that performs intra/inter prediction, transform, and quantization. The coding block can be a square or non-square block having an arbitrary size in the range of 8x8 to 64x64, or can be a square or non-square block having a size of 128x128, 256x256, or larger.
[0222] Specifically, the coding block can be hierarchically divided based on at least one of a quad tree and a binary tree. At this point, quad tree-based division can mean that a 2Nx2N coding block is divided into four NxN coding blocks, and binary tree-based division can mean that one coding block is divided into two coding blocks. Binary tree-based division can be performed symmetrically or asymmetrically. The coding block divided based on the binary tree can be a square block or a non-square block, such as a rectangular shape. Binary tree-based division can be performed on a coding block where quad tree-based division is no longer performed. Quad tree-based division may no longer be performed on a coding block divided based on the binary tree.
[0224] To implement adaptive division based on the quad tree or binary tree, the following can be used: information indicating quad tree-based division, information about the size/depth of the coding block for which quad tree-based division is allowed, information indicating binary tree-based division, information about the size/depth of the coding block for which binary tree-based division is allowed, information about the size/depth of the coding block for which binary tree-based division is not allowed, information on whether binary tree-based division is performed in a vertical direction or a horizontal direction, etc.
[0226] As shown in Figure 3, the first coding block 300 with a division depth of k can be divided into multiple second coding blocks based on the quad tree. For example, the second coding blocks 310 to 340 can be square blocks that are half the width and half the height of the first coding block, and the division depth of the second coding blocks can be increased to k+1.
[0228] The second coding block 310 with a division depth of k+1 can be divided into multiple third coding blocks with a division depth of k+2. The division of the second coding block 310 can be performed selectively using one of the quad tree and the binary tree depending on the division procedure. At this point, the division procedure can be determined based on at least one of the information indicating quad tree-based division and the information indicating binary tree-based division.
[0230] When the second coding block 310 is divided based on the quad tree, the second coding block 310 can be divided into four third coding blocks 310a that are half the width and half the height of the second coding block, and the division depth of the third coding blocks 310a can be increased to k+2. In contrast, when the second coding block 310 is divided based on the binary tree, the second coding block 310 can be divided into two third coding blocks. At this point, each of the two third coding blocks can be a non-square block having half the width or half the height of the second coding block, and the division depth can be increased to k+2. The third coding block can be determined as a non-square block of a horizontal direction or a vertical direction depending on the division direction, and the division direction can be determined based on the information on whether binary tree-based division is performed in a vertical direction or a horizontal direction.
[0232] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer divided based on the quad tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transform block.
[0233] Similarly to the division of the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or can be further divided based on the quad tree or the binary tree.
[0235] Meanwhile, the third coding block 310b divided based on the binary tree can be further divided into coding blocks 310b-2 of a vertical direction or coding blocks 310b-3 of a horizontal direction based on the binary tree, and the division depth of the relevant coding blocks can be increased to k+3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer divided based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transform block. However, the above division procedure can be performed in a limited manner based on at least one of the information about the size/depth of the coding block for which quad tree-based division is allowed, the information about the size/depth of the coding block for which binary tree-based division is allowed, and the information about the size/depth of the coding block for which binary tree-based division is not allowed.
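The division procedure of Figure 3 can be sketched as the recursion below, where a `decide` callback stands in for the signaled split information; the function names and the example decision rule are hypothetical, used only to show how the division depth grows with each quad or binary split.

```python
def partition(x, y, w, h, depth, decide):
    """Recursively divide a coding block; `decide` stands in for the signaled
    split flags and returns 'quad', 'bin_v', 'bin_h', or None (leaf)."""
    mode = decide(x, y, w, h, depth)
    if mode == "quad":          # four square children, depth increased by 1
        hw, hh = w // 2, h // 2
        return [b for (dx, dy) in ((0, 0), (hw, 0), (0, hh), (hw, hh))
                for b in partition(x + dx, y + dy, hw, hh, depth + 1, decide)]
    if mode == "bin_v":         # two non-square halves, side by side
        return (partition(x, y, w // 2, h, depth + 1, decide) +
                partition(x + w // 2, y, w // 2, h, depth + 1, decide))
    if mode == "bin_h":         # two non-square halves, stacked
        return (partition(x, y, w, h // 2, depth + 1, decide) +
                partition(x, y + h // 2, w, h // 2, depth + 1, decide))
    return [(x, y, w, h, depth)]  # leaf coding block

# Example: quad-split the 64x64 root, then binary-split one child vertically.
decide = lambda x, y, w, h, d: ("quad" if w == 64 else
                                "bin_v" if (x, y) == (0, 0) and w == 32 else None)
print(partition(0, 0, 64, 64, 0, decide))
```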
[0237] Figure 4 is a view illustrating types of predefined intra-prediction modes for a device for encoding / decoding a video according to an embodiment of the present invention.
[0239] The device for encoding/decoding a video can perform intra-prediction using one of the predefined intra-prediction modes. The predefined intra-prediction modes for intra-prediction can include non-directional prediction modes (e.g., a planar mode, a DC mode) and 33 directional prediction modes.
[0241] Alternatively, to improve intra-prediction accuracy, a greater number of directional prediction modes than the 33 prediction modes can be used. That is, M extended directional prediction modes can be defined by subdividing the angles of the directional prediction modes (M > 33), and a directional prediction mode having a predetermined angle can be derived using at least one of the 33 predefined directional prediction modes.
[0243] Figure 4 shows an example of the extended intra-prediction modes, and the extended intra-prediction modes may include two non-directional prediction modes and 65 extended directional prediction modes. The same number of extended intra-prediction modes can be used for a luminance component and a chrominance component, or a different number of intra-prediction modes can be used for each component. For example, 67 extended intra-prediction modes can be used for the luminance component, and 35 intra-prediction modes can be used for the chrominance component.
[0245] Alternatively, depending on the chrominance format, a different number of intra-prediction modes can be used in performing intra-prediction. For example, in the case of the 4:2:0 format, 67 intra-prediction modes can be used for the luminance component to perform intra-prediction, and 35 intra-prediction modes can be used for the chrominance component. In the case of the 4:4:4 format, 67 intra-prediction modes can be used for both the luminance component and the chrominance component to perform intra-prediction.
[0247] Alternatively, depending on the size and/or shape of the block, a different number of intra-prediction modes can be used to perform intra-prediction. That is, depending on the size and/or shape of the PU or CU, 35 intra-prediction modes or 67 intra-prediction modes can be used to perform intra-prediction. For example, when the CU or PU has a size less than 64x64 or is divided asymmetrically, 35 intra-prediction modes can be used to perform intra-prediction. When the size of the CU or PU is greater than or equal to 64x64, 67 intra-prediction modes can be used to perform intra-prediction. 65 directional intra-prediction modes can be allowed for Intra_2Nx2N, and only 35 directional intra-prediction modes can be allowed for Intra_NxN.
[0249] Figure 5 is a flow chart briefly illustrating an intraprediction procedure in accordance with one embodiment of the present invention.
[0251] Referring to Figure 5, an intra-prediction mode of the current block can be determined in step S500.
[0253] Specifically, the intraprediction mode of the current block can be derived based on a candidate list and an index. At this point, the candidate list contains multiple candidates, and the multiple candidates can be determined based on an intraprediction mode of the neighboring block adjacent to the current block. The neighboring block can include at least one of the blocks located at the top, the bottom, the left, the right and the corner of the current block. The index can specify one of multiple candidates from the candidate list. The candidate specified by the index can be assigned to the intraprediction mode of the current block.
[0255] An intraprediction mode used for intraprediction in the neighboring block can be assigned as a candidate. Also, an intraprediction mode having similar directionality to that of the neighboring block intraprediction mode can be assigned as a candidate. At this point, the intra-prediction mode having similar directionality can be determined by adding or subtracting a predetermined constant value to or from the neighboring block intra-prediction mode. The default constant value can be an integer, such as one, two, or greater.
[0257] The candidate list may additionally include a default mode. The default mode may include at least one of a planar mode, a DC mode, a vertical mode, and a horizontal mode. The default mode can be added adaptively, considering the maximum number of candidates that can be included in the candidate list of the current block.
[0259] The maximum number of candidates that can be included in the candidate list can be three, four, five, six, or more. The maximum number of candidates that can be included in the candidate list can be a fixed value preset in the device for encoding/decoding a video, or can be variably determined based on a characteristic of the current block. The characteristic can mean the location/size/shape of the block, the number/type of intra-prediction modes that the block can use, etc. Alternatively, information indicating the maximum number of candidates that can be included in the candidate list can be signaled separately, and the maximum number of candidates that can be included in the candidate list can be variably determined using the information. The information indicating the maximum number of candidates can be signaled at at least one of a sequence level, a snapshot level, a slice level, and a block level.
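A minimal sketch of the candidate list construction described above follows, assuming HEVC-style 35-mode indices. The candidate order, the +/-1 derivation of similar directional modes, and the choice of default modes are arranged here for illustration only.

```python
PLANAR, DC, HOR, VER = 0, 1, 10, 26  # 35-mode indices (HEVC-style numbering)

def build_candidate_list(left_mode, above_mode, max_candidates=6):
    """Build the intra-mode candidate list from the neighboring blocks' modes."""
    candidates = []
    def push(mode):
        if mode not in candidates and len(candidates) < max_candidates:
            candidates.append(mode)
    # 1) intra modes actually used by the neighboring blocks
    for m in (left_mode, above_mode):
        if m is not None:
            push(m)
    # 2) modes with similar directionality: neighbor mode +/- 1 (directional only)
    for m in (left_mode, above_mode):
        if m is not None and m > DC:
            push(2 + ((m - 2 - 1) % 33))  # m - 1, wrapped over modes 2..34
            push(2 + ((m - 2 + 1) % 33))  # m + 1, wrapped over modes 2..34
    # 3) default modes, added while room remains
    for m in (PLANAR, DC, VER, HOR):
        push(m)
    return candidates

print(build_candidate_list(left_mode=HOR, above_mode=None))
# -> [10, 9, 11, 0, 1, 26]
```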
[0261] When the extended intra-prediction modes and the 35 predefined intra-prediction modes are used selectively, the intra-prediction modes of neighboring blocks can be transformed into indices corresponding to the extended intra-prediction modes or into indices corresponding to the 35 intra-prediction modes, whereby candidates can be derived. For the index transformation, a predefined table can be used, or a scaling operation based on a predetermined value. At this point, the predefined table can define a mapping relationship between different groups of intra-prediction modes (e.g., the extended intra-prediction modes and the 35 intra-prediction modes).
[0263] For example, when the left neighboring block uses the 35 intra-prediction modes and the intra-prediction mode of the left neighboring block is 10 (a horizontal mode), it can be transformed into an index of 16 corresponding to a horizontal mode in the extended intra-prediction modes.
[0264] Alternatively, when the top neighboring block uses the extended intra-prediction modes and the intra-prediction mode of the neighboring block has an index of 50 (a vertical mode), it can be transformed into an index of 26 corresponding to a vertical mode in the 35 intra-prediction modes.
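The two examples above can be expressed as a table-driven transformation, as sketched below. Only the two mappings stated in the text are filled in; the rest of the predefined table is not reproduced here, and the function name is hypothetical.

```python
# Predefined lookup tables, seeded with the two mappings given in the text.
TO_EXTENDED = {10: 16}   # 35-mode horizontal -> extended horizontal (per text)
TO_35 = {50: 26}         # extended vertical  -> 35-mode vertical    (per text)

def transform_neighbor_mode(mode, neighbor_uses_extended, current_uses_extended):
    """Bring a neighboring block's intra mode into the current block's group."""
    if neighbor_uses_extended == current_uses_extended:
        return mode                    # already in the right mode group
    table = TO_35 if neighbor_uses_extended else TO_EXTENDED
    return table[mode]                 # predefined table lookup

print(transform_neighbor_mode(10, False, True))  # -> 16
print(transform_neighbor_mode(50, True, False))  # -> 26
```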
[0266] Based on the above-described method of determining the intraprediction mode, the intraprediction mode can be derived independently for each of the luminance component and the chrominance component, or the intraprediction mode of the chrominance component can be derived depending on the intraprediction mode of the luminance component.
[0268] Specifically, the intraprediction mode of the chrominance component can be determined based on the intraprediction mode of the luminance component as shown in the following table 1.
[0269] [Table 1]
[0274] In Table 1, intra_chroma_pred_mode means signaled information to specify the intra-prediction mode of the chrominance component, and IntraPredModeY indicates the intra-prediction mode of the luminance component.
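The body of Table 1 is not reproduced above. As an assumption of its content, the sketch below follows the HEVC-style derivation of the chrominance intra-prediction mode from intra_chroma_pred_mode and IntraPredModeY.

```python
def derive_chroma_mode(intra_chroma_pred_mode, intra_pred_mode_y):
    """Derive the chroma intra mode (HEVC-style table, assumed here)."""
    candidates = [0, 26, 10, 1]        # planar, vertical, horizontal, DC
    if intra_chroma_pred_mode == 4:    # DM: chroma reuses the luma mode
        return intra_pred_mode_y
    mode = candidates[intra_chroma_pred_mode]
    # if the chosen mode collides with the luma mode, substitute mode 34
    return 34 if mode == intra_pred_mode_y else mode

print(derive_chroma_mode(1, 26))  # vertical requested, luma is vertical -> 34
print(derive_chroma_mode(4, 10))  # DM mode: chroma inherits luma mode 10
```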
[0276] Referring to Figure 5, a reference sample for intra-prediction of the current block can be derived in step S510.
[0278] Specifically, a reference sample for intraprediction can be derived based on a neighboring sample from the current block. The neighboring sample can be a reconstructed sample from the neighboring block, and the reconstructed sample can be a reconstructed sample before a loop filter is applied or a reconstructed sample after the loop filter is applied.
[0279] A neighboring sample reconstructed before the current block can be used as the reference sample, and a neighboring sample filtered based on a predetermined intra filter can also be used as the reference sample. The intra filter may include at least one of a first intra filter applied to multiple neighboring samples located on the same horizontal line and a second intra filter applied to multiple neighboring samples located on the same vertical line. Depending on the positions of the neighboring samples, one of the first intra filter and the second intra filter can be selectively applied, or both intra filters can be applied.
[0281] Filtering can be performed adaptively based on at least one of the intra-prediction mode of the current block and the size of the transform block for the current block. For example, when the intra-prediction mode of the current block is the DC mode, the vertical mode, or the horizontal mode, no filtering can be performed. When the size of the transform block is NxM, no filtering can be performed. At this point, N and M can be the same or different values, and can be values of 4, 8, 16, or more. Alternatively, filtering can be performed selectively based on the result of comparing a predefined threshold and the difference between the intra-prediction mode of the current block and the vertical mode (or the horizontal mode). For example, when the difference between the intra-prediction mode of the current block and the vertical mode is greater than the threshold, filtering can be performed. The threshold can be defined for each transform block size as shown in Table 2.
[0283] [Table 2]
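Since the threshold values of Table 2 are likewise not reproduced above, the sketch below uses placeholder values; only the decision structure (no filtering for the DC, vertical, and horizontal modes; filtering when the mode is far enough from both the vertical and horizontal directions) follows the text, and taking the minimum of the two distances is an assumed reading.

```python
# Placeholder thresholds per transform block size (Table 2's values are not
# available here); 35-mode indices assumed for DC / horizontal / vertical.
THRESHOLD_BY_TRANSFORM_SIZE = {4: 8, 8: 7, 16: 1, 32: 0}
DC, HOR, VER = 1, 10, 26

def should_filter_reference_samples(intra_mode, transform_size):
    """Decide whether to filter the reference samples for this block."""
    if intra_mode in (DC, HOR, VER):   # no filtering for these modes
        return False
    threshold = THRESHOLD_BY_TRANSFORM_SIZE.get(transform_size, 0)
    diff = min(abs(intra_mode - HOR), abs(intra_mode - VER))
    return diff > threshold            # filter only sufficiently diagonal modes

print(should_filter_reference_samples(18, 16))  # diagonal mode, 16x16 -> True
```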
[0288] The intra filter can be determined as one of multiple predefined intra filter candidates in the device for encoding / decoding a video. For this purpose, an index may be signaled that specifies an intra filter of the current block among the multiple candidate intra filters. Alternatively, the intra filter can be determined based on at least one of the current block size / shape, transform block size / shape, information about filter intensity, and variations of neighboring samples.
[0290] Referring to Figure 5, intraprediction can be performed using the intraprediction mode of the current block and the reference sample in step S520.
[0291] That is, the prediction sample of the current block can be obtained using the intra-prediction mode determined in step S500 and the reference sample derived in step S510. However, in the case of intra-prediction, a boundary sample of a neighboring block may be used, and therefore the quality of the prediction snapshot may be reduced. Therefore, a correction procedure can be performed on the prediction sample generated through the prediction procedure described above, and it will be described in detail with reference to Figures 6 to 15. However, the correction procedure is not limited to being applied only to an intra-prediction sample, and it can be applied to an inter-prediction sample or to the reconstructed sample.
[0293] Figure 6 is a view illustrating a method of correcting a prediction sample of a current block based on differential information from neighboring samples in accordance with an embodiment of the present invention.
[0295] The prediction sample of the current block can be corrected based on the differential information of multiple neighboring samples of the current block. The correction can be performed on all prediction samples in the current block, or on prediction samples in some predetermined regions. The regions can be one row/column or multiple rows/columns, can be preset regions for correction in the device for encoding/decoding a video, or can be variably determined based on at least one of the size/shape of the current block and the intra-prediction mode.
[0297] Neighbor samples can belong to neighboring blocks at the top, left, and upper left corner of the current block. The number of neighboring samples used for correction can be two, three, four or more. The neighboring sample positions can be variably determined depending on the prediction sample position that is the correction target in the current block. Alternatively, some of the neighboring samples may have positions fixed independently of the position of the prediction sample that is the correction target, and the remaining neighboring samples may have positions that are variably dependent on the position of the prediction sample that is the correction target.
[0299] Differential information from neighboring samples can mean a differential sample between neighboring samples, or it can mean a value obtained by scaling the differential sample by a predetermined constant value (e.g., one, two, three, etc.). At this point, the predetermined constant value can be determined by considering the position of the prediction sample that is the correction target, the position of the column or row that includes the prediction sample that is the correction target, the position of the prediction sample within the column or row, and so on.
[0301] For example, when the intraprediction mode of the current block is vertical mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block can be used to obtain the final prediction sample, as shown in Formula 1 (y = 0 ... N-1).
[0303] [Formula 1]
[0308] For example, when the intraprediction mode of the current block is horizontal mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block can be used to obtain the final prediction sample, as shown in Formula 2 (x = 0 ... N-1).
[0310] [Formula 2]
[0312] For example, when the intraprediction mode of the current block is vertical mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(-1, y) adjacent to the left boundary of the current block can be used to obtain the final prediction sample. At this point, the differential sample can be added to the prediction sample, or the differential sample can be scaled by a predetermined constant value and then added to the prediction sample. The predetermined constant value used when scaling can be determined differently depending on the column and/or row. For example, the prediction sample can be corrected as shown in Formula 3 and Formula 4 (y = 0 ... N-1).
[0314] [Formula 3]
[0316] [Formula 4]
[0319] For example, when the intraprediction mode of the current block is horizontal mode, differential samples between the upper left neighboring sample p(-1, -1) and the neighboring samples p(x, -1) adjacent to the upper boundary of the current block can be used to obtain the final prediction sample, as described in the case of vertical mode. For example, the prediction sample can be corrected as shown in Formula 5 and Formula 6 (x = 0 ... N-1).
[0321] [Formula 5]
[0323] [Formula 6]
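Since the bodies of Formulas 1 to 6 are not reproduced above, the following sketch assumes the HEVC-style form of this correction, P'(0, y) = P(0, y) + ((p(-1, y) - p(-1, -1)) >> 1) for vertical mode, with the differential sample scaled down further for columns away from the boundary (Formulas 3 and 4); horizontal mode is symmetric.

```python
# Hedged sketch of the differential-sample correction (Formulas 1-6),
# assuming the HEVC-style form of the boundary filter. pred is an NxN
# block, left[y] = p(-1, y), top[x] = p(x, -1), corner = p(-1, -1).

def correct_vertical_mode(pred, left, corner, num_columns=1):
    n = len(pred)
    for y in range(n):
        diff = left[y] - corner
        for x in range(num_columns):
            # Scale the differential sample down as the column moves
            # away from the left boundary: >>1 for x=0, >>2 for x=1, ...
            pred[y][x] += diff >> (x + 1)
    return pred

def correct_horizontal_mode(pred, top, corner, num_rows=1):
    n = len(pred)
    for x in range(n):
        diff = top[x] - corner
        for y in range(num_rows):
            pred[y][x] += diff >> (y + 1)
    return pred
```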
[0325] Figures 7 and 8 are views illustrating a method of correcting a prediction sample based on a predetermined correction filter in accordance with an embodiment of the present invention.
[0327] The prediction sample can be corrected based on the neighboring sample of the prediction sample that is the correction target and a predetermined correction filter. At this point, the neighboring sample can be specified by an angular line of the directional prediction mode of the current block, or it can be at least one sample located on the same angular line as the prediction sample that is the correction target. Also, the neighboring sample can be a prediction sample in the current block, or it can be a reconstructed sample in a neighboring block reconstructed before the current block.
[0329] At least one of the number of taps, the strength, and a filter coefficient of the correction filter can be determined based on at least one of: the position of the prediction sample that is the correction target, whether the prediction sample that is the correction target is located on the boundary of the current block, the intraprediction mode of the current block, the angle of the directional prediction mode, the prediction mode (inter or intra mode) of the neighboring block, and the size/shape of the current block.
[0331] Referring to Figure 7, when the directional prediction mode has an index of 2 or 34, at least one prediction/reconstructed sample located at the lower left of the prediction sample that is the correction target and the predetermined correction filter can be used to obtain the final prediction sample. At this point, the prediction/reconstructed sample at the lower left may belong to a line preceding the line that includes the prediction sample that is the correction target. The prediction/reconstructed sample at the lower left can belong to the same block as the current sample, or to a neighboring block adjacent to the current block.
[0333] Filtering for the prediction sample can be performed on the line at the block boundary only, or it can be performed on multiple lines. A correction filter where at least one of the number of filter taps and a filter coefficient differs for each line can be used. For example, a (1/2, 1/2) filter can be used for the first line closest to the block boundary, a (12/16, 4/16) filter can be used for the second line, a (14/16, 2/16) filter can be used for the third line, and a (15/16, 1/16) filter can be used for the fourth line.
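The per-line weights above can be applied as in the following sketch; treating the filtered lines as the columns nearest the left boundary is a simplification, and neighbor_of is an assumed helper that returns the prediction/reconstructed sample on the same angular line.

```python
# Hedged sketch of per-line correction filtering for directional
# modes 2 / 34, using the 2-tap weights quoted above (in sixteenths:
# (8,8), (12,4), (14,2), (15,1) for lines 1-4).
LINE_WEIGHTS = [(8, 8), (12, 4), (14, 2), (15, 1)]

def correct_per_line(pred, neighbor_of, num_lines=4):
    # neighbor_of(x, y): assumed helper returning the prediction or
    # reconstructed sample on the same angular line as pred[y][x].
    n = len(pred)
    for line in range(min(num_lines, len(LINE_WEIGHTS))):
        w_pred, w_nbr = LINE_WEIGHTS[line]
        x = line  # distance from the block boundary (simplified)
        for y in range(n):
            pred[y][x] = (w_pred * pred[y][x]
                          + w_nbr * neighbor_of(x, y) + 8) >> 4
    return pred
```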
[0335] Alternatively, when the directional prediction mode has an index of 3 to 6 or 30 to 33, filtering can be performed at the block boundary as shown in Figure 8, and a 3-tap correction filter can be used to correct the prediction sample. Filtering can be performed using the lower left sample of the prediction sample that is the correction target, the lower left sample of that sample, and a 3-tap correction filter that takes as input the prediction sample that is the correction target. The position of the neighboring sample used by the correction filter can be determined differently based on the directional prediction mode. The filter coefficient of the correction filter can be determined differently depending on the directional prediction mode.
[0337] Different correction filters can be applied depending on whether the neighboring block is encoded in inter mode or intra mode. When the neighboring block is encoded in intra mode, a filtering procedure can be used where more weight is given to the prediction sample, compared to when the neighboring block is encoded in inter mode. For example, in the case that the intraprediction mode is 34, when the neighboring block is encoded in inter mode, a (1/2, 1/2) filter can be used, and when the neighboring block is encoded in intra mode, a (4/16, 12/16) filter can be used.
[0339] The number of lines to be filtered in the current block may vary depending on the size / shape of the current block (for example, the coding block or the prediction block). For example, when the current block size is equal to or less than 32x32, filtering can be performed on only one line at the block boundary; otherwise, filtering can be performed on multiple lines including the line at the block boundary.
[0340] Figures 7 and 8 are based on the case where the 35 intraprediction modes of Figure 4 are used, but they can be applied in the same/similar way to the case where the extended intraprediction modes are used.
[0342] Figure 9 is a view illustrating a method of correcting a prediction sample using weight and offset in accordance with one embodiment of the present invention.
[0344] There may be a case where encoding is not performed via intraprediction or interprediction even though the current block is similar to a collocated block of the previous frame, because brightness changes occur between the previous frame and the current frame, or the quality of the prediction snapshot encoded via intraprediction or interprediction may be relatively low. In this case, a weight and an offset for brightness compensation can be applied to the prediction sample so that the quality of the prediction snapshot can be improved.
[0346] Referring to Figure 9, at least one of the weight w and the offset f can be determined in step S900.
[0348] At least one of the weight w and the offset f may be signaled in at least one of a sequence parameter set, a snapshot parameter set, and a slice header. Alternatively, at least one of the weight w and the offset f may be signaled in predetermined block units that share them, and multiple blocks (e.g., CU, PU, and TU) belonging to a predetermined block unit may share one signaled weight w and/or offset f.
[0350] At least one of the weight w and the offset f may be signaled regardless of the prediction mode of the current block, or it may be signaled selectively considering the prediction mode. For example, when the prediction mode of the current block is inter mode, the weight w and/or the offset f can be signaled; otherwise, they may not be signaled. At this point, the inter mode may include at least one of skip mode, merge mode, AMVP mode, and the current snapshot reference mode. The current snapshot reference mode can mean a prediction mode that uses a pre-reconstructed region in the current snapshot that includes the current block. A motion vector for the current snapshot reference mode can be used to specify the pre-reconstructed region. A flag or index indicating whether the current block is encoded in the current snapshot reference mode may be signaled, or the mode may be derived via a reference snapshot index of the current block. The current snapshot for the current snapshot reference mode can exist at a fixed position (for example, the position with refIdx = 0 or the last position) in the reference snapshot list of the current block. Alternatively, the current snapshot can be placed variably in the reference snapshot list, and for this purpose, a separate reference snapshot index indicating the position of the current snapshot can be signaled.
[0352] The weight can be derived using the change in brightness between the first template in a particular shape adjacent to the current block and the second template corresponding to it adjacent to the previous block. The second template may include an unavailable sample. In this case, an available sample can be copied to the position of the unavailable sample, or an available sample derived through interpolation between multiple available samples can be used. At this point, the available sample can be included in the second template or in the neighboring block. At least one of the coefficient, the shape, and the number of taps of the filter used in the interpolation can be variably determined based on the size and/or shape of the template. A template composition procedure will be described in detail with reference to Figures 10 to 15.
[0354] For example, when the neighboring sample of the current block is designated by yi (i ranging from 0 to N-1) and the neighboring sample of the collocated block is designated by xi (i ranging from 0 to N-1), the weight w and the offset f can be derived as follows.
[0356] Using a template in a particular shape adjacent to the current block, the weight w and the offset f can be derived so as to obtain the minimum value of E(w, f) in Formula 7.
[0358] [Formula 7]
[0363] Formula 7 for obtaining the minimum value can be changed to Formula 8.
[0365] [Formula 8]
[0367] Formula 9 for deriving the weight w and Formula 10 for deriving the offset f can be obtained from Formula 8.
[0369] [Formula 9]
[0371] [Formula 10]
[0373] Referring to Figure 9, at least one of the weight and the offset determined in step S900 can be used to correct the prediction sample.
[0375] For example, when a brightness change occurs over whole frames, the weight w and the offset f are applied to the prediction sample p generated through intraprediction so that a corrected prediction sample p' can be obtained as shown in Formula 11.
[0377] [Formula 11]
[0379] At this point, the weight w and the offset f can also be applied to the prediction sample generated via interprediction, or they can be applied to the reconstructed sample.
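The bodies of Formulas 7 to 11 are not reproduced above; the following sketch assumes the standard least-squares solution minimizing E(w, f) = Σᵢ (yᵢ - (w·xᵢ + f))², and then applies the correction p' = w·p + f, which is taken to be the form of Formula 11.

```python
# Hedged sketch, assuming E(w, f) = sum_i (y_i - (w * x_i + f))^2
# (Formula 7) and its closed-form least-squares minimizer
# (Formulas 9 and 10).

def derive_weight_and_offset(x, y):
    # x: template samples of the collocated block; y: template samples
    # of the current block (both of length N).
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xx = sum(v * v for v in x)
    sum_xy = sum(a * b for a, b in zip(x, y))
    denom = n * sum_xx - sum_x * sum_x
    w = (n * sum_xy - sum_x * sum_y) / denom if denom else 1.0
    f = (sum_y - w * sum_x) / n
    return w, f

def apply_compensation(pred, w, f):
    # Assumed form of Formula 11: p' = w * p + f for each sample.
    return [[w * p + f for p in row] for row in pred]
```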
[0381] Figures 10 to 15 are views illustrating a method of composing a template to determine the weight w in accordance with one embodiment of the present invention.
[0383] Referring to the left of Figure 10, a template may be composed of all neighboring samples adjacent to the current block, or a template may be composed of some samples subsampled from the neighboring samples adjacent to the current block. The middle of Figure 10 shows an example of 1/2 subsampling, where a template is composed of only the gray samples. Instead of 1/2 subsampling, the template can be composed using 1/4 subsampling or 1/8 subsampling. As shown to the right of Figure 10, a template can be composed of all neighboring samples adjacent to the current block except for the sample located at the upper left. Although not shown in Figure 10, considering the position of the current block in the snapshot or in a coding tree block (largest coding unit), a template composed of only the left samples or a template composed of only the top samples may be used.
[0384] Referring to Figure 11, the template can be composed by increasing the number of neighboring samples. That is, the template in Figure 11 may be composed of the first neighboring samples adjacent to the current block boundary and the second neighboring samples adjacent to the first neighboring samples.
[0386] As shown to the left of Figure 11, a template can be composed of all neighboring samples that belong to the two lines adjacent to the current block boundary, or, as shown in the middle of Figure 11, a template can be composed by subsampling the template on the left. As shown to the right of Figure 11, a template can be composed excluding four samples that belong to the upper left. Although not shown in Figure 11, considering the position of the current block in the snapshot or in a coding tree block (largest coding unit), a template composed of only the left samples or a template composed of only the top samples may be used.
[0388] Alternatively, different templates can be composed depending on the size and/or shape of the current block (whether the current block has a square shape or whether the current block is divided symmetrically). For example, as shown in Figure 12, the subsampling rate of the template can be applied differently depending on the size of the current block. For example, as shown on the left of Figure 12, when the block size is less than or equal to 64x64, a 1/2 subsampled template may be composed. As shown to the right of Figure 12, when the block size is greater than or equal to 128x128, a 1/4 subsampled template can be composed.
[0390] Referring to Figure 13, the template may be composed by increasing the number of neighboring samples adjacent to the current block depending on the size of the block.
[0392] Multiple template candidates that can be used in a sequence or slice can be determined, and one of the multiple template candidates can be used selectively. The multiple template candidates can include templates that differ in shape and/or size from each other. Information on the shape and/or size of the template may be signaled in a sequence header or a slice header. In the device for encoding/decoding a video, an index can be assigned to each template candidate. To identify the template candidate to be used in the current sequence, snapshot, or slice among the multiple template candidates, the syntax type_weight_pred_template_idx can be encoded. The device for decoding a video can selectively use a template candidate based on the syntax type_weight_pred_template_idx.
[0393] For example, as shown in Figure 14, the template in the middle of Figure 10 can be assigned to 0, the template on the right of Figure 10 can be assigned to 1, the template in the middle of Figure 11 can be assigned to 2, and the template on the right of Figure 11 can be assigned to 3. The template used in the sequence can be signaled.
[0395] When weighted prediction is performed using a non-square block, the template can be composed by applying different subsampling rates to the long and short sides so that the total number of templates is 2^N. For example, as shown in Figure 15, the template can be composed by performing 1/2 subsampling on the short side and 1/4 subsampling on the long side.
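The following sketch composes such a template as a list of neighboring-sample coordinates, with separate subsampling steps for the top and left sides; the coordinates are relative to the top-left sample of the current block, and the helper name is illustrative.

```python
# Hedged sketch of template composition with different subsampling
# rates per side, e.g. 1/4 on the long (top) side and 1/2 on the
# short (left) side of a non-square block, as in Figure 15.

def compose_template(width, height, step_top, step_left,
                     include_top_left=False):
    template = []
    if include_top_left:
        template.append((-1, -1))            # p(-1, -1)
    template += [(x, -1) for x in range(0, width, step_top)]   # top
    template += [(-1, y) for y in range(0, height, step_left)] # left
    return template

# Example: a 16x4 block with 1/4 subsampling on top, 1/2 on the left.
positions = compose_template(16, 4, step_top=4, step_left=2)
```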
[0397] When intraprediction is performed on the current block based on a directional intraprediction mode, the generated prediction sample may not reflect the characteristics of the original snapshot, since the range of the reference samples being used is limited (for example, intraprediction is performed using only the neighboring samples adjacent to the current block). For example, when there is an edge in the current block or when a new object appears around the boundary of the current block, the difference between the prediction sample and the original snapshot can be large depending on the position of the prediction sample in the current block.
[0399] In this case, the residual value is relatively large, and therefore the number of bits to be encoded / decoded may increase. In particular, the residual value in a region relatively far from the current block boundary may include a large number of high-frequency components, which may result in degradation of encoding / decoding efficiency.
[0401] To solve the above problems, a method of generating or updating the prediction sample in sub-block units can be used. Accordingly, the prediction accuracy can be improved in a region relatively far from the block boundary.
[0403] For the sake of explanation, in the following embodiments, a prediction sample generated based on the directional intraprediction mode is referred to as the first prediction sample. In addition, a prediction sample generated based on a non-directional intraprediction mode or generated by performing interprediction can also be included in the category of the first prediction sample.
[0404] A method of correcting the prediction sample based on the offset will be described in detail with reference to Figure 16.
[0406] FIG. 16 is a view illustrating a method of correcting a prediction sample based on offset in accordance with an embodiment of the present invention.
[0408] Referring to Figure 16, for the current block, whether to update the first prediction sample using the offset can be determined in step S1600. Whether to update the first prediction sample using the offset can be determined by a flag decoded from a bit stream. For example, the syntax 'is_sub_block_refinement_flag' indicating whether to update the first prediction sample using the offset can be signaled through a bit stream. When the value of is_sub_block_refinement_flag is one, the procedure for updating the first prediction sample using the offset can be used in the current block. When the value of is_sub_block_refinement_flag is zero, the procedure for updating the first prediction sample using the offset is not used in the current block. However, step S1600 is intended to selectively update the first prediction sample, and it is not an essential configuration for achieving the purpose of the present invention, so step S1600 may be omitted in some cases.
[0410] When it is determined that the procedure for updating the first prediction sample using the offset is used, an intraprediction pattern of the current block can be determined in step S1610. Through the intraprediction pattern, all or some regions of the current block to which the offset is applied, the way of dividing the current block, whether to apply the offset to a sub-block included in the current block, the size/sign of the offset assigned to each sub-block, etc. can be determined.
[0412] One of multiple patterns predefined in the device for encoding/decoding a video can be selectively used as the intraprediction pattern of the current block, and for this purpose, an index specifying the intraprediction pattern of the current block can be signaled from a bit stream. As another example, the intraprediction pattern of the current block can be determined based on the division mode of the prediction unit or the coding unit of the current block, the size/shape of the block, whether the intraprediction mode is a directional mode, the angle of the directional intraprediction mode, etc.
[0413] Whether or not the index indicating the intraprediction pattern of the current block is signaled can be determined by predetermined flag information signaled from a bit stream. For example, when the flag information indicates that the index indicating the intraprediction pattern of the current block is signaled from a bit stream, the intraprediction pattern of the current block can be determined based on an index decoded from a bit stream. At this point, the flag information can be signaled at at least one of a snapshot level, a slice level, and a block level.
[0415] When the flag information indicates that the index indicating the intraprediction pattern of the current block is not signaled from a bit stream, the intraprediction pattern of the current block can be determined based on the division mode of the prediction unit or the coding unit of the current block, etc. For example, the pattern in which the current block is divided into sub-blocks may be the same as the pattern in which the coding block is divided into prediction units.
[0417] When determining the intra-prediction pattern of the current block, the offset can be obtained in sub-block units at step S1620. The offset can be signaled in units of a slice, a coding unit, or a prediction unit. As another example, the offset can be derived from the neighboring sample of the current block. The offset may include at least one of offset value information and offset sign information. At this point, the offset value information can be in a range of integers greater than or equal to zero.
[0419] When the offset is determined, the second prediction sample can be obtained for each sub-block in step S1630. The second prediction sample can be obtained by applying the offset to the first prediction sample. For example, the second prediction sample can be obtained by adding or subtracting the offset to or from the first prediction sample.
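A minimal sketch of steps S1600 to S1630 is given below for the two-sub-block horizontal pattern of Figure 17 (offset applied only to the lower sub-block); flag parsing and pattern signaling are abstracted into the function arguments.

```python
# Hedged sketch of the offset-based update (steps S1600-S1630) for a
# horizontal two-sub-block pattern: zero offset in the upper
# sub-block, offset f added or subtracted in the lower one.

def update_prediction(first_pred, apply_update, offset_f, subtract=False):
    if not apply_update:           # is_sub_block_refinement_flag == 0
        return first_pred
    n = len(first_pred)
    second_pred = [row[:] for row in first_pred]
    sign = -1 if subtract else 1
    for i in range(n // 2, n):     # lower sub-block only
        for j in range(n):
            # Second prediction sample: P(i, j) + f or P(i, j) - f.
            second_pred[i][j] += sign * offset_f
    return second_pred
```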
[0421] Figures 17 to 21 are views illustrating examples of an intraprediction pattern of a current block in accordance with one embodiment of the present invention.
[0423] For example, in the example shown in Figure 17, when the index is '0' or '1', the current block can be divided into upper and lower sub-blocks. The offset may not be assigned to the upper sub-block, and the offset 'f' may be assigned to the lower sub-block. Therefore, the first prediction sample (P(i, j)) can be used intact in the upper sub-block, and the second prediction sample (P(i, j)+f or P(i, j)-f) that is generated by adding or subtracting the offset to or from the first prediction sample can be used in the lower sub-block. In the present invention, 'not assigned' can mean that the offset is not assigned to the block, or that the offset having the value of '0' is assigned to the block.
[0425] When the index is '2' or '3', the current block is divided into left and right sub-blocks. The offset may not be assigned to the left sub-block, and the offset 'f' may be assigned to the right sub-block. Therefore, the first prediction sample (P(i, j)) can be used intact in the left sub-block, and the second prediction sample (P(i, j)+f or P(i, j)-f) that is generated by adding or subtracting the offset to or from the first prediction sample can be used in the right sub-block.
[0427] The range of available intraprediction patterns can be limited based on the intraprediction mode of the current block. For example, when the intraprediction mode of the current block is a vertical-direction intraprediction mode or a prediction mode in a direction similar to the vertical direction (for example, among the 33 directional prediction modes, when the intraprediction mode has an index of 22 to 30), only an intraprediction pattern that divides the current block in the horizontal direction (for example, index 0 or index 1 in Figure 17) can be applied to the current block.
[0429] As another example, when the intraprediction mode of the current block is a horizontal-direction intraprediction mode or a prediction mode in a direction similar to the horizontal direction (for example, among the 33 directional prediction modes, when the intraprediction mode has an index of 6 to 14), only an intraprediction pattern that divides the current block in the vertical direction (for example, index 2 or index 3 in Figure 17) can be applied to the current block.
[0431] In Figure 17, the offset is not assigned to one of the sub-blocks included in the current block, but the offset is assigned to another. Whether to assign the offset to the sub-block can be determined based on information signaled for each sub-block.
[0433] Whether to assign the offset to the sub-block can be determined based on the position of the sub-block, an index to identify the sub-block in the current block, and so on. For example, based on a predetermined boundary of the current block, the offset may not be assigned to the sub-block that is adjacent to the predetermined boundary, and the offset may be assigned to the sub-block that is not adjacent to the predetermined boundary.
[0434] When the predetermined boundary is assumed to be the upper boundary of the current block, under the intraprediction pattern that corresponds to index '0' or '1', the offset may not be assigned to the sub-block that is adjacent to the upper boundary of the current block, and the offset may be assigned to the sub-block that is not adjacent to the upper boundary of the current block.
[0436] When the predetermined boundary is assumed to be the left boundary of the current block, under the intraprediction pattern that corresponds to index '2' or '3', the offset may not be assigned to the sub-block that is adjacent to the left boundary of the current block, and the offset may be assigned to the sub-block that is not adjacent to the left boundary of the current block.
[0438] In Figure 17, it is assumed that the offset is not assigned to one of the sub-blocks included in the current block while the offset is assigned to another. As another example, different offset values can be assigned to the sub-blocks included in the current block.
[0440] An example of where different offsets are assigned for each sub-block will be described with reference to Figure 18.
[0442] Referring to Figure 18, when the index is '0' or '1', the offset 'h' can be assigned to the upper sub-block of the current block, and the offset 'f' can be assigned to the lower sub-block of the current block. Therefore, the second prediction sample (P(i, j)+h or P(i, j)-h) obtained by adding or subtracting the offset 'h' to or from the first prediction sample can be generated in the upper sub-block, and the second prediction sample (P(i, j)+f or P(i, j)-f) obtained by adding or subtracting the offset 'f' to or from the first prediction sample can be generated in the lower sub-block.
[0444] Referring to Figure 18, when the index is '2' or '3', the offset 'h' can be assigned to the left sub-block of the current block, and the offset 'f' can be assigned to the right sub-block of the current block. Therefore, the second prediction sample (P(i, j)+h or P(i, j)-h) obtained by adding or subtracting the offset 'h' to or from the first prediction sample can be generated in the left sub-block, and the second prediction sample (P(i, j)+f or P(i, j)-f) obtained by adding or subtracting the offset 'f' to or from the first prediction sample can be generated in the right sub-block.
[0446] In Figures 17 and 18, the current block is divided into two sub-blocks that have the same size, but the number of sub-blocks and / or the size of the sub-blocks included in the current block is not limited to the examples shown in Figures 17 and 18. The number of sub-blocks included in the current block can be three or more, and the sub-blocks can be of different sizes.
[0448] When multiple intraprediction patterns are available, the available intraprediction patterns can be grouped into multiple categories. In this case, the intraprediction pattern of the current block can be selected based on the first index to identify a category and the second index that identifies an intraprediction pattern in the category.
[0450] An example where the intraprediction pattern of the current block is determined based on the first index and the second index will be described with reference to Figure 19.
[0452] In the example shown in Figure 19, 12 intraprediction patterns can be classified into three categories, each including four intraprediction patterns. For example, intraprediction patterns that correspond to indices 0 to 3 can be classified as category 0, intraprediction patterns that correspond to indices 4 to 7 can be classified as category 1, and intraprediction patterns that correspond to indices 8 to 11 can be classified as category 2.
[0454] The device for decoding a video can decode the first index from a bit stream to specify the category that includes at least one intra-prediction pattern. In the example shown in Figure 19, the first index can specify one of the categories 0, 1, and 2.
[0456] When the category is specified based on the first index, the intraprediction pattern of the current block can be determined based on the second index decoded from a bit stream. When category 1 is specified by the first index, the second index can specify one of the four intraprediction patterns (that is, from index 4 to index 7) of category 1.
[0458] In Figure 19, each category includes the same number of intraprediction patterns, but the categories do not necessarily include the same number of intraprediction patterns.
[0460] The number of available intraprediction patterns or the number of categories can be determined in units of a sequence or a slice. Also, at least one of the number of available intraprediction patterns and the number of categories can be signaled through a sequence header or a slice header.
[0461] As another example, the number of available intraprediction patterns and/or the number of categories can be determined based on the size of the prediction unit or the coding unit of the current block. For example, when the size of the current block (for example, the coding unit of the current block) is greater than or equal to 64x64, the intraprediction pattern of the current block can be selected from the five intraprediction patterns shown in Figure 20. In contrast, when the size of the current block (for example, the coding unit of the current block) is less than 64x64, the intraprediction pattern of the current block can be selected from the intraprediction patterns shown in Figures 17, 18, or 19.
[0463] In Figures 17 to 20, the sub-blocks included in each intraprediction pattern have a rectangular shape. As another example, an intraprediction pattern where at least one of the size and shape of the sub-blocks differs from the others can be used. For example, Figure 21 is a view illustrating an example of an intraprediction pattern with different sizes and shapes of sub-blocks.
[0465] The offset for each sub-block (eg, the offset h, f, g, or i of each sub-block shown in Figures 17 to 21) can be decoded from a bit stream, or it can be derived from the neighboring sample adjacent to the current block.
[0467] As another example, the sub-block offset can be determined by considering the distance from a sample at a particular position in the current block. For example, the offset can be determined in proportion to a value representing the distance between a sample at a predetermined position in the current block and a sample at a predetermined position in the sub-block.
[0469] As another example, the sub-block offset can be determined by adding or subtracting, to or from a preset value, a value determined based on the distance between a sample at a predetermined position in the current block and a sample at a predetermined position in the sub-block.
[0471] As another example, the offset can be determined based on the ratio of a value representing the size of the current block to a value representing the distance between a sample at a predetermined position in the current block and a sample at a predetermined position in the sub-block.
[0472] At this point, the sample at the predetermined position in the current block can include a sample adjacent to the left boundary of the current block, a sample located at the upper boundary of the current block, a sample adjacent to the upper left corner of the current block, etc.
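The following sketch illustrates the distance-based derivations above; the reference position, the distance metric, and the constants are illustrative assumptions.

```python
# Hedged sketch: derive a sub-block offset from the distance between a
# sample at a predetermined position in the current block (here its
# top-left corner) and a sample at a predetermined position in the
# sub-block (here the sub-block's top-left corner). scale/base are
# illustrative constants, not taken from the text.

def sub_block_offset(sub_x, sub_y, scale=1, base=0):
    distance = abs(sub_x) + abs(sub_y)   # L1 distance to (0, 0)
    return base + scale * distance
```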
[0474] Figure 22 is a view illustrating a method of performing prediction using an intra-block copy technique in accordance with one embodiment of the present invention.
[0476] Intra-block copy (IBC) is a procedure where the current block is predicted/reconstructed using a block already reconstructed in the same snapshot as the current block (hereinafter referred to as 'a reference block'). When a snapshot contains a large number of letters, such as the Korean alphabet, the Latin alphabet, etc., and the letters contained in the current block when it is reconstructed are contained in an already decoded block, intra-block copy can improve encoding/decoding performance.
[0478] An intra-block copy procedure can be classified as an intraprediction procedure or an interprediction procedure. When the intra-block copy procedure is classified as an intraprediction procedure, an intraprediction mode can be defined for the intra-block copy procedure. When the intra-block copy procedure is classified as an interprediction procedure, a bit stream may include a flag indicating whether to apply the intra-block copy procedure to the current block. Alternatively, whether the current block uses intra-block copy can be confirmed through a reference snapshot index of the current block. That is, when the reference snapshot index of the current block indicates the current snapshot, interprediction can be performed on the current block using intra-block copy. For this purpose, a pre-reconstructed current snapshot can be added to a reference snapshot list for the current block. The current snapshot can exist at a fixed position in the reference snapshot list (for example, the position with a reference snapshot index of 0 or the last position). Alternatively, the current snapshot can be placed variably in the reference snapshot list, and for this purpose, a separate reference snapshot index indicating the position of the current snapshot may be signaled.
[0480] To specify the reference block of the current block, the position difference between the current block and the reference block can be defined as a motion vector (hereinafter referred to as a block vector).
[0482] The block vector can be derived as the sum of a prediction block vector and a differential block vector. The device for encoding a video can generate the prediction block vector through predictive coding, and can encode the differential block vector indicating the difference between the block vector and the prediction block vector. In this case, the device for decoding a video can derive the block vector of the current block using the prediction block vector derived from pre-decoded information and the differential block vector decoded from a bit stream.
[0484] At this point, the prediction block vector can be derived based on the block vector of a neighboring block adjacent to the current block, the block vector in an LCU of the current block, the block vector in an LCU row/column of the current block, etc.
[0486] The device for encoding a video can encode the block vector without performing predictive encoding of the block vector. In this case, the device for decoding a video can obtain the block vector by decoding the block vector information signaled through a bit stream. The correction procedure can be performed on the prediction / reconstructed sample generated through the intra-block copy procedure. In this case, the correction procedure described with reference to Figures 6 to 21 can be applied in the same / similar manner, and therefore the detailed description thereof will be omitted.
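The block-vector derivation above can be sketched as follows; the entry points are illustrative names, and clipping of the reference position to the pre-reconstructed region is omitted.

```python
# Hedged sketch of intra-block-copy block-vector derivation and
# reference-block fetch within the same snapshot.

def derive_block_vector(pred_bv, diff_bv):
    # Predictive coding: block vector = prediction + differential.
    return (pred_bv[0] + diff_bv[0], pred_bv[1] + diff_bv[1])

def fetch_reference_block(recon, x, y, bv, width, height):
    # Copy the already-reconstructed block addressed by the block
    # vector (bounds checking omitted for brevity).
    rx, ry = x + bv[0], y + bv[1]
    return [row[rx:rx + width] for row in recon[ry:ry + height]]
```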
[0488] The device for encoding a video can generate a bit stream by performing binary conversion on a symbol, such as a transform coefficient, a motion vector difference, a syntax in a slice, etc., and performing arithmetic encoding on the binary values. At this point, for symbol compression, a context can be determined by considering the value of the same symbol of the neighboring block, information about the neighboring block, the position of the current block, etc. When a probability index is determined based on the selected context, the probability of occurrence of the symbol can be determined based on the determined probability index. Then, symbol compression performance can be improved through arithmetic encoding in which cumulative statistics of internal symbols are kept and the probability of occurrence is recalculated based on the value of the encoded symbol. CABAC can be used as an example of such an arithmetic coding procedure.
[0490] An example of encoding a symbol in the device for encoding a video will be described in detail with reference to Figure 23. A detailed description of decoding the symbol in the device for decoding a video is omitted, but the decoding of the symbol can be performed by the device for decoding a video through the reverse procedure of the following embodiments.
[0492] Figure 23 is a flow chart illustrating a symbol encoding procedure.
[0493] The device for encoding a video can convert the symbol to binary in step S2300. When an encoding target symbol is not a binary symbol, the device for encoding a video can convert the symbol to a binary symbol. For example, the device for encoding a video can convert a non-binary symbol, such as a transform coefficient, a motion vector difference, etc., to a binary symbol consisting of the values 0 and 1. When the symbol is converted to binary, each bit of the mapped codeword having the value '0' or '1' can be referred to as a binary.
[0495] Conversion of the symbol to binary can be performed through unary binarization, truncated unary binarization, etc.
[0497] Table 3 shows a unary binarization procedure, and Table 4 shows a truncated unary binarization procedure when the maximum bit length (cMax) is six.
[0499] [Table 3]
[0504] [Table 4]
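The contents of Tables 3 and 4 are not reproduced above; the sketch below implements the standard unary and truncated unary binarizations they are understood to describe (truncated with cMax = 6, whose last codeword drops the terminating '0').

```python
# Hedged sketch of unary and truncated unary binarization
# (cf. Tables 3 and 4; cMax = 6 in the truncated case).

def unary(value):
    # value ones followed by a terminating zero: 0 -> '0', 3 -> '1110'.
    return '1' * value + '0'

def truncated_unary(value, c_max=6):
    # Same as unary, except the codeword for value == cMax omits the
    # terminating zero: 6 -> '111111'.
    return '1' * value if value == c_max else '1' * value + '0'
```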
[0509] When the conversion of the symbol to binary is complete, a context model of the symbol is selected in step S2310. The context model represents a probability model for each symbol. The probability of occurrence of 0 or 1 in a binary may differ for each context model. In the following embodiments, the probability of occurrence of the symbol may indicate the probability of occurrence of 0 or 1 in a binary. In HEVC, there are approximately 400 independent contexts for the various symbols.
[0511] When encoding of the slice starts, the probability index (pStateIdx) for each context can be initialized based on at least one of the quantization parameter (Qp) and the slice type (I, P, or B).
[0513] In the case of using tiles, when encoding of the tile starts, the probability index for each context can be initialized based on at least one of the quantization parameter (Qp) and the slice type (I, P, or B).
[0515] Then, based on the selected context model, arithmetic coding can be performed for each symbol in step S2320. Arithmetic coding of the symbol can be performed for each context model. Therefore, even for the same symbol, different contexts do not affect each other's probability update and bit-stream encoding. When the probability of occurrence of the symbol is determined, encoding can be performed depending on the probability of occurrence of the symbol and the value of each symbol. At this point, the number of encoding bits can be determined differently depending on the value of each symbol. That is, when the value of each symbol has a high probability of occurrence, the symbol can be compressed into a small number of bits. For example, when the value of each symbol has a high probability of occurrence, a symbol that has ten binaries can be encoded in fewer than ten bits.
[0517] The interval [0,1) is divided into subintervals based on the probability of occurrence of the symbol, and among the real numbers that belong to the divided subintervals, a number that can be represented by the least number of bits and its coefficient are selected, whereby the symbol can be encoded. When dividing the interval [0,1) into subintervals, a long subinterval can be assigned when the probability of occurrence of the symbol is high, and a short subinterval can be assigned when the probability of occurrence of the symbol is low.
[0519] Figure 24 is a view illustrating an example of dividing the interval [0,1) into subintervals based on the probability of occurrence of a symbol. The arithmetic coding of the symbol '010' will be described when the probability of occurrence of 1 is 0.2 and the probability of occurrence of 0 is 0.8.
[0521] Since the first binary of the symbol '010' is '0' and the probability of occurrence of '0' is 0.8, the interval [0,1) can be updated to [0, 0.8).
[0523] Since the second binary of the symbol '010' is '1' and the probability of occurrence of '1' is 0.2, the interval [0, 0.8) can be updated to [0.64, 0.8).
[0525] Since the third binary of the symbol '010' is '0' and the probability of occurrence of '0' is 0.8, the interval [0.64, 0.8) can be updated to [0.64, 0.768).
[0527] In the interval [0.64, 0.768), a number is selected that can be represented by the least number of bits. In the interval [0.64, 0.768), 0.75 = 1×(1/2) + 1×(1/2)^2, so that the symbol '010' can be encoded in the binary '11', excluding 0.
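The worked example above can be reproduced with the following sketch, which narrows the interval binary by binary and then searches the final interval for the dyadic number needing the fewest fractional bits.

```python
import math

# Hedged sketch of the interval subdivision for '010' with P(0) = 0.8:
# [0,1) -> [0,0.8) -> [0.64,0.8) -> [0.64,0.768), coded as '11' (0.75).

def arithmetic_encode(bits, p0=0.8):
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p0
        low, high = (low, split) if b == '0' else (split, high)
    # Find the dyadic fraction m / 2^k in [low, high) with smallest k.
    k = 1
    while True:
        m = math.ceil(low * (1 << k))     # smallest m with m/2^k >= low
        if m / (1 << k) < high:
            return format(m, '0%db' % k)  # the k fractional bits
        k += 1

assert arithmetic_encode('010') == '11'
```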
[0529] A most probable symbol (MPS) means a symbol that has a high frequency of occurrence between 0 and 1, and a least probable symbol (LPS) means a symbol that has a low frequency of occurrence between 0 and 1. The initial probability values of occurrence of the MPS and LPS can be determined based on the context and the quantization parameter (Qp) value.
[0531] In Figure 24, for each binary, the probability of occurrence of 0 and 1 is assumed to be fixed, but the probability of occurrence of MPS and the probability of occurrence of LPS of the symbol can be updated depending on whether the current coded binary is the MPS or LPS.
[0533] For example, when the binary value of the binary currently being encoded is equal to the MPS, the MPS probability value of the symbol may increase while the LPS probability value may decrease. In contrast, when the binary value of the binary currently being encoded is equal to the LPS, the MPS probability value of the symbol may decrease while the LPS probability value may increase.
[0535] In CABAC, 64 MPS occurrence probabilities and 64 LPS occurrence probabilities are defined, but a smaller or larger number of MPS occurrence probabilities or LPS occurrence probabilities can be defined and used. The MPS occurrence probabilities and the LPS occurrence probabilities can be specified by an index (pStateIdx) indicating the probability of occurrence of the symbol. When the value of the index indicating the probability of occurrence of the symbol is large, the probability of occurrence of the MPS is high.
[0536] Table 5 is intended to explain an example of updating a probability index (pStateIdx).
[0538] [Table 5]
[0543] When MPS is encoded, a probability index (pStateIdx) indicating the probability value of the current context can be updated to an index corresponding to transIdxMPS. For example, when the value of pStateIdx is 16 and the MPS is encoded, pStateIdx can be updated to index 17 that corresponds to transIdxMPS. In contrast, when the value of pStateIdx is 16 and the LPS is encoded, pStateIdx can be updated to index 13 which corresponds to transIdxLPS. When pStateIdx is updated, the MPS and LPS occurrence probabilities can be updated.
[0545] When the value of pStateIdx is 0, the probability of occurrence of the MPS is 0.5. In this state, when the LPS is encoded, the frequency of the LPS may become greater than that of the MPS. Therefore, when the value of pStateIdx is 0 and the LPS is encoded, the MPS and the LPS can be interchanged.
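A sketch of the state update follows. Only the transitions quoted above (16 → 17 on an MPS, 16 → 13 on an LPS) and the swap at state 0 are taken from the text; the rest of the LPS transition table is a placeholder, since Table 5 is not reproduced here.

```python
# Hedged sketch of the probability-state update. The MPS transition
# advances the state by one (capped); the LPS table below contains
# only the entry quoted in the text (16 -> 13), with a crude
# placeholder fallback for other states.

TRANS_IDX_LPS = {16: 13}  # Table 5 not reproduced; placeholder

def update_state(p_state_idx, val_mps, bin_val):
    if bin_val == val_mps:                       # MPS was coded
        p_state_idx = min(p_state_idx + 1, 62)   # e.g. 16 -> 17
    else:                                        # LPS was coded
        if p_state_idx == 0:                     # P(MPS) = 0.5
            val_mps = 1 - val_mps                # swap MPS and LPS
        p_state_idx = TRANS_IDX_LPS.get(p_state_idx,
                                        max(p_state_idx - 3, 0))
    return p_state_idx, val_mps
```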
[0547] The probability index value for each context can be initialized in units of a slice or a tile. Since the probability index is initialized in slice units, the current slice can be decoded regardless of whether the previous slice or the previous frame has been encoded. However, when the symbol is encoded using an initialized probability index, the probability specified by the initialized probability index does not properly reflect the actual probability of occurrence of the symbol, and therefore the initial coding efficiency of the slice can be reduced.
[0549] To solve this problem, the probability index accumulated at a predetermined point during encoding/decoding of the previous slice can be set as the initial value of the probability index of the current slice. At this point, the predetermined point may indicate the encoding/decoding start point of a block located at a particular position (e.g., the middle position) in the slice in scan order. The probability value or probability index accumulated in the previous slice can be directly encoded/decoded via the header of the current slice, etc.
[0551] As another example, multiple probability indices can be assigned to a context to determine the initial probability index of the slice differently. For example, when there are multiple probability indices that have different values for an arbitrary context (ctx), one of the multiple probability indices can be determined as the initial probability index. At this point, the information for selecting one of the multiple probability indices can be signaled through the slice header, etc. For example, the device for decoding a video can select the probability index through information transmitted from the slice header, and can perform decoding using the selected probability index as the initial probability index.
[0553] As another example, multiple initial values (InitValue) can be assigned to a context to determine the initial probability index of the slice differently. When an initial value is selected, the variables m and n can be derived using the initial value, and a variable preCtxState indicating the previous context state can be derived through the derived variables m and n. Based on the variable preCtxState indicating the previous context state, the MPS and the initial value of the context probability index pStateIdx can be derived.
[0555] Table 7 shows a probability index derivation procedure based on the initial value.
[0557] [Table 7]
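Table 7 is not reproduced above; the following sketch assumes it matches the HEVC-style derivation, in which m and n are unpacked from an 8-bit InitValue and combined with the slice quantization parameter to produce preCtxState, the MPS, and pStateIdx.

```python
# Hedged sketch of the probability-index derivation from InitValue,
# assuming Table 7 follows the HEVC-style procedure.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def init_context(init_value, slice_qp_y):
    slope_idx = init_value >> 4          # upper 4 bits of InitValue
    offset_idx = init_value & 15         # lower 4 bits of InitValue
    m = slope_idx * 5 - 45
    n = (offset_idx << 3) - 16
    pre_ctx_state = clip3(1, 126,
                          ((m * clip3(0, 51, slice_qp_y)) >> 4) + n)
    val_mps = 1 if pre_ctx_state > 63 else 0
    p_state_idx = (pre_ctx_state - 64) if val_mps else (63 - pre_ctx_state)
    return p_state_idx, val_mps
```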
[0560] An index specifying an initial value (InitValue) to be used in the slice can be signaled through the slice header. The index specifying an initial context value can be defined as a CABAC initialization index (cabac_init_idx). Based on a table defining a mapping relationship between at least two selected from a group of the CABAC initialization index, a context index (ctxIdx), and the initial value, an initial value corresponding to cabac_init_idx can be determined.
[0562] Also, a syntax indicating the number of available CABAC initialization indices can be signaled through the slice header, the sequence header, the snapshot header, etc. The syntax indicating the number of available CABAC initialization indices can be defined as 'num_cabac_init_idx_minus1'.
[0564] Table 8 and Table 9 are tables to explain an example of determining the initial value based on the CABAC initialization index. Table 8 shows the case where the number of available CABAC initialization indices is five, and Table 9 shows the case where the number of available CABAC initialization indices is six. Table 8 or Table 9 can be used selectively based on the value of num_cabac_init_idx_minus1.
[0566] [Table 8]
[0571] [Table 9]
[0574] Explaining the 'cbf_luma' syntax as an example, the contexts of the cbf_luma syntax, which indicates whether there is a non-zero transform coefficient in the luminance component transform block, can have different initial values depending on cabac_init_idx. The probability index (pStateIdx) derived based on the initial value can also be determined differently depending on cabac_init_idx.
[0576] cabac_init_idx can indicate the offset to be applied to the probability index. For example, the probability index (pStateIdx) can be derived based on a quantization parameter (Qp) 'sliceQpY' of an arbitrary slice and an initial value (InitValue) determined for each context, and the offset to be applied to the probability index can be determined based on the value of cabac_init_idx. When the offset is determined, the probability index can be recalculated based on the probability index and the offset. Therefore, even when the quantization parameters (Qp) of the slice are equal, the context model can have multiple probability indices (i.e., multiple initial values of pStateIdx).
[0578] As another example, the initial value can be determined for each context, and the offset to be applied to the initial value can be determined based on the value of cabac_init_idx. The initial value is recalculated based on the determined offset, and the probability index can be derived based on the recalculated initial value.
[0580] Multiple probability indices for a context may exist for a particular symbol rather than for all symbols. For example, for a particular symbol, such as the transform coefficient, a residual motion vector (a motion vector difference), a reference snapshot index (a reference index), etc., there may be multiple probability indices for a context.
[0582] Whether multiple initial values (InitValue) or multiple probability indices (pStateIdx) are applied to a context can be determined based on the slice type or regardless of the slice type. Also, the initial value may differ for each slice type.
[0584] Figure 25 is a view illustrating an example of setting a probability index depending on a position of a block to be encoded.
[0586] The probability index can be determined depending on the spatial position or the scan order of the block to be encoded. For example, as shown in the example in Figure 25, different probability indices (pStateIdx) can be assigned depending on the scan order in the slice. At this point, the probability index value (pStateIdx) can be selected to be the same as or similar to the probability index value (prevPstateIdx) of the co-located region in the previous slice.
[0588] A spatial region for initializing the probability index can be referred to as 'a context initialization region'. The context initialization region may be provided in a rectangular shape, but it is not limited thereto. Also, the context initialization region can be assigned to have a preset size, but it is not limited thereto. Information for specifying the context initialization region can be signaled through the slice header, etc.
[0590] Assuming that the context initialization region has a rectangular shape, the unit by which the probability index is initialized can be determined based on a syntax 'num_row_ctu_minus1' indicating the number of rows of coding tree units included in the context initialization region. For example, when the value of 'num_row_ctu_minus1' is one, a region that includes the CTUs of two rows can be assigned as an initialization region, as shown in the example in Figure 25.
[0592] A slice is a basic unit that can independently perform entropy encoding/decoding. A slice is not necessarily provided in a rectangular shape. A slice can be divided into multiple slice segments, and a slice segment can be composed of multiple coding tree units (CTUs).
[0594] A tile is the same as a slice in that it is composed of multiple coding tree units, but differs in that a tile is provided in a rectangular shape. Entropy encoding/decoding can be performed in units of tiles. When entropy encoding/decoding is performed in units of tiles, there is an advantage that parallelization can be achieved in which multiple tiles are encoded/decoded simultaneously.
[0596] Figures 26 and 27 are views illustrating examples of tile division and slice segments.
[0598] As shown in the examples in Figures 26 and 27, a tile may include at least one slice segment, and one slice segment may exist in a tile.
[0600] An independent slice segment and at least one dependent slice segment make up a slice. As shown in the examples in Figures 26 and 27, the independent slice segment is not necessarily included in the tile.
[0602] Although not shown in the drawings, multiple tiles may exist in a slice, or one tile may exist in one slice.
[0604] Figure 28 is a view illustrating an example of determining an initial probability index for each tile differently.
[0606] When using tiles, the context model is initialized in units of tiles. Different initial values (InitValue) or different probability indices (pStateIdx) can be used depending on the position of the tile. That is, even if the contexts are the same, different probability indices (pStateIdx) can be used depending on the tile.
[0608] An index specifying the initial value of each tile can be signaled through the slice segment header, etc. For example, when the initial value is specified through the syntax 'tile_cabac_init_idx' specifying the initial value of the tile, the probability index can be derived based on the specified initial value.
[0610] The probability index for each context of each tile can be derived based on the initial value or the probability index that corresponds to the context of the co-located tile of the previous frame. As another example, the probability index for each context of each tile can be derived based on an initial value selected from multiple initial values defined for the respective contexts, or it can be determined as a probability index selected from multiple probability indices defined for the respective contexts. When multiple initial values or multiple probability indices are defined for the respective contexts, an index may be signaled to select the initial value or probability index for each tile.
[0612] In the example shown in Figure 28, for a symbol related to residual motion information (the motion vector difference), in the first tile (tile0), the initial probability index is determined as pStateIdx0, and in the second tile (tile1), the initial probability index is determined as pStateIdx1.
[0613] Industrial applicability
[0614] The present invention can be used in encoding / decoding a video signal.
Claims (7)
[1]
1. A method of decoding a video, the method comprising:
generating prediction samples based on an intraprediction mode of a current block; and
determining whether or not to apply an update process to the prediction samples of the current block,
wherein, when determining whether to apply the update process to the current block, the prediction samples in the current block are updated based on their respective offset,
wherein, in a first sub-region in the current block, an offset is determined based on a reference sample adjacent to the current block, and
wherein, a second sub-region in the current block is assigned an offset equal to zero.
[2]
2. The method of claim 1, wherein a pattern formed by the first sub-region and the second sub-region is different when the intraprediction mode of the current block is a non-directional mode and when the intraprediction mode of the current block is a directional mode.
[3]
The method of claim 2, wherein the pattern is one of the following: a horizontal pattern in which the first sub-region and the second sub-region are distinguished by a horizontal line, a vertical pattern in which the The first sub-region and the second sub-region are distinguished by a vertical line, and a polygonal pattern in which one of the first or second sub-regions has a polygonal shape, while the other has a rectangular shape.
[4]
4. A method of encoding a video, the method comprising:
generating prediction samples based on an intraprediction mode of a current block; and
determining whether or not to apply an update process to the prediction samples of the current block,
wherein, when determining whether to apply the update process to the current block, the prediction samples in the current block are updated based on their respective offset,
wherein, in a first sub-region in the current block, an offset is determined based on a reference sample adjacent to the current block, and
wherein, a second sub-region in the current block is assigned an offset equal to zero.
[5]
The method of claim 4, wherein a pattern formed by the first sub-region and the second sub-region is different when the intra-prediction mode of the current block is a non-directional mode and when the intra-prediction mode of the current block is a directional mode.
[6]
6. The method of claim 5, wherein the pattern is one of the following: a horizontal pattern in which the first sub-region and the second sub-region are separated by a horizontal line, a vertical pattern in which the first sub-region and the second sub-region are separated by a vertical line, and a polygonal pattern in which one of the first and second sub-regions has a polygonal shape while the other has a rectangular shape.
[7]
7. A non-transitory computer-readable recording medium comprising a bitstream of a video signal, the bitstream being generated by an encoding method comprising:
generating prediction samples based on an intra-prediction mode of a current block; and
determining whether or not to apply an update process to the prediction samples of the current block,
wherein, when it is determined to apply the update process to the current block, the prediction samples in the current block are updated based on their respective offsets,
wherein, in a first sub-region in the current block, an offset is determined based on a reference sample adjacent to the current block, and
wherein a second sub-region in the current block is assigned an offset equal to zero.
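Purely as an illustration of claims 1 to 3 (and their encoder-side counterparts), the sketch below updates intra-prediction samples with per-sample offsets, forcing the offset to zero in the second sub-region and changing the sub-region pattern with the directionality of the mode. The halving of the block into sub-regions, the mode numbering and the offset formula are assumptions of this example, not limitations of the claims.

```python
import numpy as np

PLANAR, DC = 0, 1  # non-directional intra modes (illustrative numbering)

def update_prediction_samples(pred, intra_mode, top_ref, left_ref):
    """Sketch of the claimed update process: in a first sub-region the
    offset is derived from reference samples adjacent to the block, in a
    second sub-region the offset is fixed to zero, and the sub-region
    pattern differs between non-directional and directional modes."""
    h, w = pred.shape
    updated = pred.astype(np.int32)
    if intra_mode in (PLANAR, DC):
        # Non-directional mode: e.g. a horizontal pattern, with the
        # first sub-region being the top half of the block.
        in_first = lambda y, x: y < h // 2
    else:
        # Directional mode: e.g. a vertical pattern, with the
        # first sub-region being the left half of the block.
        in_first = lambda y, x: x < w // 2
    for y in range(h):
        for x in range(w):
            if in_first(y, x):
                # One plausible adaptive offset: half the gradient between
                # the adjacent left reference sample and a top reference.
                offset = (int(left_ref[y]) - int(top_ref[0])) >> 1
            else:
                offset = 0  # second sub-region: offset equal to zero
            updated[y, x] = min(max(updated[y, x] + offset, 0), 255)
    return updated.astype(np.uint8)
```

For instance, for an 8x8 DC-predicted block, update_prediction_samples(pred, DC, top_ref, left_ref) adjusts only the top half of the block according to the reference-sample gradient and leaves the bottom half unchanged.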
Patent family:
Publication No. | Publication date
ES2736374A2|2019-12-30|
KR20170031643A|2017-03-21|
GB2596496A|2021-12-29|
CA2998098A1|2017-03-16|
WO2017043949A1|2017-03-16|
ES2736374B1|2021-03-05|
GB202114724D0|2021-12-01|
GB2557544A|2018-06-20|
CN108353164A|2018-07-31|
US20180255295A1|2018-09-06|
ES2710234B1|2020-03-09|
EP3349445A1|2018-07-18|
ES2736374R1|2020-02-28|
EP3349445A4|2019-03-20|
US10554969B2|2020-02-04|
ES2710234A2|2019-04-23|
US20200120338A1|2020-04-16|
GB201805988D0|2018-05-23|
ES2710234R1|2019-05-29|
ES2844525R1|2021-10-08|
GB2557544B|2021-12-01|
Legal status:
2021-07-22 | BA2A | Patent application published | Ref. document ES 2844525, kind code A2, effective date 2021-07-22
2021-10-08 | EC2A | Search report published | Ref. document ES 2844525, kind code R1, effective date 2021-10-01
Priority claims:
Application No. | Filing date | Patent title
KR20150128964|2015-09-11|
KR20150129439|2015-09-14|